
6 ways we’re making Azure reservations even more powerful


Our newest Azure reservations features can help you save more on your Azure costs, easily manage reservations, and create internal reports. Based on your feedback, we’ve added the following features to Azure reservations:

 

Azure Databricks pre-purchase plans

You can now save up to 37 percent on your Azure Databricks costs when you pre-purchase Azure Databricks commit units (DBCU) for one or three years. Any Azure Databricks use deducts from the pre-purchased DBCUs automatically. You can use the pre-purchased DBCUs at any time during the purchase term.
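To see how such a pre-purchase discount plays out, here is a small sketch with made-up rates; actual DBCU pricing varies by tier and term:

```javascript
// Made-up rates to illustrate the mechanics; not actual Azure Databricks pricing.
function prePurchaseSavings(payAsYouGoRatePerDbcu, commitTotalPrice, commitDbcus) {
  const payAsYouGoTotal = payAsYouGoRatePerDbcu * commitDbcus;
  const savings = payAsYouGoTotal - commitTotalPrice;
  return { savings, percent: (savings / payAsYouGoTotal) * 100 };
}

// Example: 100,000 DBCUs at a hypothetical $0.50 pay-as-you-go rate,
// pre-purchased for $31,500 in this made-up scenario.
const deal = prePurchaseSavings(0.5, 31500, 100000);
console.log(`${deal.percent.toFixed(0)}% saved ($${deal.savings})`); // "37% saved ($18500)"
```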

Databricks SKU selection

See our documentation "Optimize Azure Databricks costs with a pre-purchase" to learn more, or purchase an Azure Databricks plan in the Azure portal.

 

App Service Isolated stamp fee reservations

Save up to 40 percent on your App Service Isolated stamp fee costs with App Service reserved capacity. After you purchase a reservation, the Isolated stamp fee usage that matches the reservation is no longer charged at on-demand rates. App Service workers are charged separately and don't get the reservation discount.

App Service Reserved Capacity

Visit our documentation "Prepay for Azure App Service Isolated Stamp Fee with reserved capacity" to learn more, or purchase a reservation in the Azure portal.

 

Automatically renew your reservations

Now you can set up your reservations to renew automatically, ensuring you keep getting the reservation discounts without any gaps. You can opt in to automatic renewal at any time during the term of the reservation, and opt out at any time. You can also update the renewal quantity to better align with changes in your usage pattern. To set up automatic renewal, go to any reservation that you've already purchased and select the Renewal tab.

Renewal setup

 

Scope reservation to resource group

You can now scope reservations to a resource group. This is helpful when the same subscription has deployments from multiple cost centers, each represented by its own resource group, and the reservation is purchased for a particular cost center. Narrowing the reservation's application to a resource group makes internal charge-back easier. You can scope a reservation to a resource group at the time of purchase or update the scope afterward. If you delete or migrate a resource group, the reservation has to be rescoped manually.

Resource group scope

Learn more in our documentation "Scope reservations."

 

Enhanced usage data to help with charge back, savings, and utilization

Organizations rely on their Enterprise Agreement (EA) usage data to reconcile invoices, track usage, and charge back internally. We recently added more details to the EA usage data to make your reservation reporting easier. With these changes you can easily perform the following tasks:

  • Get reservation purchase and refund charges
  • See which resource consumed how many hours of a reservation, and charge that usage back
  • See how many hours of a reservation were not used
  • Amortize reservation costs
  • Calculate reservation savings
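The utilization and savings math above can be sketched as follows. The field names and rates here are illustrative, not the actual EA usage-data schema; see the linked documentation for the real column names:

```javascript
// Mock usage rows: each row records hours a resource drew from a reservation,
// plus the on-demand rate it would have paid and the effective reserved rate.
const usageRows = [
  { resourceId: 'vm-1', reservedHours: 500, onDemandRate: 0.20, reservedRate: 0.12 },
  { resourceId: 'vm-2', reservedHours: 220, onDemandRate: 0.20, reservedRate: 0.12 },
];
const reservationHoursBought = 744; // one instance for a 31-day month

const usedHours = usageRows.reduce((sum, r) => sum + r.reservedHours, 0);
const unusedHours = reservationHoursBought - usedHours;
const savings = usageRows.reduce(
  (sum, r) => sum + r.reservedHours * (r.onDemandRate - r.reservedRate), 0);

console.log(usedHours, unusedHours, savings.toFixed(2)); // 720 24 "57.60"
```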

The new data files are available only through the Azure portal, not through the EA portal. Beyond the raw data, you can now also see reservations in Cost Analysis.

You can visit our documentation "Get Enterprise Agreement reservation costs and usage" to learn more.

 

Purchase using API

You can now purchase Azure reservations using REST APIs. The APIs below help you get the SKUs, calculate the cost, and then make the purchase:
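A hedged sketch of that get-SKUs, price, purchase flow using the Microsoft.Capacity resource provider. Verify the exact paths, api-version, and request bodies against the current Azure Reservations REST reference; the subscription ID is a placeholder:

```javascript
// Build request descriptors for the three reservation-purchase steps.
// No network calls here; pair these with your preferred authenticated HTTP client.
const apiVersion = '2019-04-01';
const subscriptionId = '00000000-0000-0000-0000-000000000000'; // placeholder

// 1. List the reservation SKUs available to the subscription.
function catalogRequest() {
  return {
    method: 'GET',
    path: `/subscriptions/${subscriptionId}/providers/Microsoft.Capacity/catalog` +
          `?api-version=${apiVersion}&reservedResourceType=VirtualMachines`,
  };
}

// 2. Calculate the price for a chosen SKU, term, and quantity.
function calculatePriceRequest(body) {
  return {
    method: 'POST',
    path: `/providers/Microsoft.Capacity/calculatePrice?api-version=${apiVersion}`,
    body,
  };
}

// 3. Place the purchase as a reservation order.
function purchaseRequest(orderId, body) {
  return {
    method: 'PUT',
    path: `/providers/Microsoft.Capacity/reservationOrders/${orderId}?api-version=${apiVersion}`,
    body,
  };
}
```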



Guest Post: How Maplytics uses Spatial Math Module for emergency response by locating affected areas


Inogic, a Microsoft ISV for Dynamics 365 CRM, specializes in developing apps for Dynamics 365 that also run on PowerApps. Their flagship product Maplytics is a geo-analytical app Certified for Microsoft Dynamics (CfMD). Maplytics is powered by Bing Maps to provide location insights for Dynamics 365 data and helps businesses do an in-depth analysis of their prospects and clients geographically. It is rated as a preferred solution on Microsoft AppSource.

“Partnering with Bing Maps as the geo-analytical platform has helped us offer the most up to date technology and features with Maplytics and offer fitting solutions to businesses across domains like Finance, Insurance, Health, Manufacturing, Agriculture amongst other industries to accelerate their business processes and get the most of their Dynamics 365 implementation,” said Roohi Shaikh, CEO of Inogic.

Below is a recent use case where Maplytics helped meet a challenging business requirement using Bing Maps Services.

Non-Profit Tasked with Securing the Safety of Students

A non-profit organization ensures the safety and security of school students in a certain region. It safeguards the students from abductions, bushfires, suspicious activity in nearby areas or any other threats by instantly alerting nearby schools. After receiving an alert, the school’s administration sends a team to the impacted location or takes preventive measures required to minimize the hazard. This process results in increased credibility of the education system and improved security for the students, allowing the schools in the region to be proactive upon receiving any alert from the non-profit organization. Also, it reduces the cost of disaster management plans and enables the schools to perform effectively.

In one example scenario, the non-profit organization discovers a fire has broken out at an oil factory in the vicinity of several schools. The non-profit's sources deployed in the region immediately alert the schools within 2-, 5-, and 10-mile radiuses of the impacted area. They create a High Alert, a Low Alert, and an Informative Alert so the schools can take the necessary action based on their distance from the fire. These alerts are conveyed to the schools via phone call and email.

Solution

Maplytics uses Spatial Math Module offered by the Bing Maps API to perform a proximity search to locate the schools in the requested proximity from Dynamics 365 CE and display them on the map in addition to listing them out with additional details. This helps the non-profit to take proactive actions like sending an email or creating a phone call activity to be assigned to a representative for follow-up with each school.

Maplytics makes use of multiple modules and the REST API services that are available in Bing Maps as part of this solution.

The Spatial Math Module offers various operations for spatial calculations. Maplytics uses it to calculate the distance between two locations: in this case, the center point of the incident (such as a fire breakout) and each school address stored in the Dynamics 365 CE application.

To present any data on the map or to perform any geospatial operation, it is important to have the geo-coordinates (latitude/longitude) of the location. For this, the addresses of the schools are already geocoded using the Bing Maps REST Services API.

e.g. http://dev.virtualearth.net/REST/v1/Locations/US/WA/98052/Redmond/1%20Microsoft%20Way?key={BingMapsKey}
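A small sketch of assembling such a Locations URL from structured US address parts; the key placeholder must be replaced with a real Bing Maps key:

```javascript
// Build a Bing Maps REST Locations URL for a structured US address.
function geocodeUrl({ countryRegion, adminDistrict, postalCode, locality, addressLine }, key) {
  const parts = [countryRegion, adminDistrict, postalCode, locality, addressLine]
    .map(encodeURIComponent)
    .join('/');
  return `https://dev.virtualearth.net/REST/v1/Locations/${parts}?key=${key}`;
}

const url = geocodeUrl(
  { countryRegion: 'US', adminDistrict: 'WA', postalCode: '98052',
    locality: 'Redmond', addressLine: '1 Microsoft Way' },
  '{BingMapsKey}'
);
console.log(url); // matches the example URL above
```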

We ship a workflow that executes automatically when a new school is added to the system or the address of a school is changed. Next, we get the geo-coordinates of the incident location so that we can calculate distances from that point. For this, we make use of the Search Module.

Microsoft.Maps.loadModule('Microsoft.Maps.Search', function () {
    var searchManager = new Microsoft.Maps.Search.SearchManager(map);
    var requestOptions = {
        bounds: map.getBounds(),
        where: 'Seattle',
        callback: function (answer, userData) {
            map.setView({ bounds: answer.results[0].bestView });
            map.entities.push(new Microsoft.Maps.Pushpin(answer.results[0].location));
        }
    };
    searchManager.geocode(requestOptions);
});

The above snippet of code plots the incident location on the map.

Using the mathematical formulae available, a collection of geo-coordinates is obtained to draw the 2-, 5-, and 10-mile radius circles. See the screenshot below:

Maplytics screenshot
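The underlying math can be sketched from scratch: the spherical destination-point formula yields a ring of coordinates for a given radius. This is an illustration of the technique, not the Spatial Math Module's actual implementation:

```javascript
// Compute points on a circle of a given radius (miles) around a center,
// using the spherical destination-point formula.
const EARTH_RADIUS_MILES = 3959;

function circleCoordinates(center, radiusMiles, numPoints = 36) {
  const toRad = (d) => (d * Math.PI) / 180;
  const toDeg = (r) => (r * 180) / Math.PI;
  const lat1 = toRad(center.latitude);
  const lon1 = toRad(center.longitude);
  const angular = radiusMiles / EARTH_RADIUS_MILES; // radius as an angle
  const points = [];
  for (let i = 0; i < numPoints; i++) {
    const bearing = toRad((360 / numPoints) * i);
    const lat2 = Math.asin(
      Math.sin(lat1) * Math.cos(angular) +
      Math.cos(lat1) * Math.sin(angular) * Math.cos(bearing));
    const lon2 = lon1 + Math.atan2(
      Math.sin(bearing) * Math.sin(angular) * Math.cos(lat1),
      Math.cos(angular) - Math.sin(lat1) * Math.sin(lat2));
    points.push({ latitude: toDeg(lat2), longitude: toDeg(lon2) });
  }
  return points;
}

// Ring of 36 points, 2 miles around a (made-up) incident location.
const ring = circleCoordinates({ latitude: 47.6, longitude: -122.3 }, 2);
console.log(ring.length); // 36
```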

Finally, we plot the schools on the map that are within 2, 5, and 10 miles of the incident. Here the Spatial Math Module comes to the rescue.

Microsoft.Maps.SpatialMath.getDistanceTo(pushpins[0].getLocation(), pushpins[1].getLocation(), Microsoft.Maps.SpatialMath.DistanceUnits.Miles);

In the above function, we provide the geo-coordinates of the two locations. The function can return the distance in both metric (kilometers) and imperial (miles) units.
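For reference, the distance behind a call like getDistanceTo can be approximated from scratch with the haversine formula. The coordinates below are made up for illustration, and the Bing Maps module may use slightly different earth-radius constants:

```javascript
// Great-circle distance in miles between two { latitude, longitude } points.
function distanceMiles(a, b) {
  const toRad = (d) => (d * Math.PI) / 180;
  const R = 3959; // approximate mean earth radius in miles
  const dLat = toRad(b.latitude - a.latitude);
  const dLon = toRad(b.longitude - a.longitude);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.latitude)) * Math.cos(toRad(b.latitude)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// A school roughly 9.7 miles from the incident falls inside the 10-mile alert ring.
const incident = { latitude: 47.6, longitude: -122.33 };
const school = { latitude: 47.74, longitude: -122.33 };
console.log(distanceMiles(incident, school).toFixed(1)); // "9.7"
```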

The comparison results that fall within the requested 10-mile radius are then displayed on the map as shown below:

Maplytics Screenshot - Plot Records

With this kind of visualization on the map, the system can immediately identify the schools that need to be notified of the incident. The related school information can be quickly retrieved by clicking on the specific pushpin.

Maplytics Screenshot - Little Harbor School

From here a phone call activity can be created in CRM and assigned to another department for personal follow-up with the school. At the same time, an email can be sent out to all the schools in one go using the Mass Email option available in Maplytics:

Maplytics Screenshot - Mass Email

You can see that in a matter of a few seconds Maplytics meets the demands of a challenging situation. With the help of the powerful services available through the Bing Maps API, we were able to identify the schools in the vicinity and create actionable items for the team in the form of emails and phone calls.

Further, using Bing Maps services, Maplytics has been able to develop features such as optimized routing with turn-by-turn navigation, defining and aligning sales territories with Territory Management, performing radius searches in proximities, planning and scheduling appointments with check-in timestamps, and determining Area of Service.

To learn more about Maplytics, go to https://www.maplytics.com/.

You can get more information about Bing Maps at https://www.microsoft.com/maps.

– Maplytics Team

AI, Machine Learning and Data Science Roundup: July/August 2019


A mostly monthly roundup of news about Artificial Intelligence, Machine Learning and Data Science. This is an eclectic collection of interesting blog posts, software announcements and data applications from Microsoft and elsewhere that I've noted over the past month or so.

Open Source AI, ML & Data Science News

StanfordNLP: a pure-Python package for grammatical analysis of sentences in over 50 languages.

New projects added to the PyTorch ecosystem: Skorch (scikit-learn compatibility), botorch (Bayesian optimization), and many others.

PyTorch-Transformers, a library of pretrained NLP models (BERT, GPT-2 and more) from HuggingFace.

R 3.6.1 is released.

Announcing mlr3, a new machine-learning framework for R.

Dash for R: a framework for building interactive web applications on both Python and R models.

R now supports the Apache Arrow data interchange format via the arrow package on CRAN.

A Nature article about Julia, the scientific computing language.

Industry News

Ethical Aspects of Autonomous and Intelligent Systems,  the position statement on AI ethics from the IEEE.

ParlAI, a Python framework and repository of data sets and reference models which Facebook is using to address weaknesses in chatbots today.

Facebook releases ELI5, a set of scripts to download paired questions and answers from the ELI5 subreddit, along with models for question-answering research.

Facebook open sources CraftAssist, a platform for creating AI assistants that can manipulate the Minecraft world via chat conversations.

Lyft releases the Level 5 Dataset, the largest publicly-released dataset for autonomous driving models.

Uber releases the Plato Research Dialogue System, a framework for creating, training, and evaluating conversational AI agents.

Uber releases Ludwig v0.2. This update to the code-free deep learning toolbox adds support for Comet.ml and BERT models, and a deployment server.

EfficientNet-EdgeTPU, a family of image classification models optimized to run on Google's low-power Edge TPU  chips.

The What-If Tool is now available within GCP AI Platform, to visualize the impact of variables on Tensorflow model outputs.

IBM introduces AI Explainability 360, a suite of open-source tools for machine learning interpretability.

Microsoft News

Microsoft invests $1B in Open AI to spur research in general artificial intelligence.

AI for Cultural Heritage, a new pillar in Microsoft's $125M AI for Good portfolio.

VS Code adds support for debugging Python cells in Jupyter Notebooks.

Azure Form Recognizer (preview) adds the ability to extract structured data from scanned receipts.

Cognitive Services Text Analytics can now provide sentiment scores for sentences and entire documents, and with improved accuracy.

Video Indexer now allows the language model to be customized, so that specialized vocabulary (jargon) can be automatically detected and transcribed.

Microsoft Machine Learning Server 9.4 is now available, with updates to the open-source engines for operationalizing R and Python.

AzureR, a family of packages for creating, managing and monitoring Azure services from the R language.

Azure Machine Learning Services adds a command-line interface complementing the existing Python SDK, to support MLOps workflows via the CLI.

Microsoft open-sources scripts and notebooks to pre-train and finetune the BERT natural language model with domain-specific texts.

Learning resources

A beginner's introduction to recurrent neural networks from Victor Zhou, with a from-scratch implementation of a sentiment analysis RNN in Python.

Two new, free courses from fast.ai: Deep Learning from the Foundations and A Code-First Introduction to Natural Language Processing.

Statistics with Julia, a draft (but comprehensive) book on statistical analysis in the Julia language.

Keynote presentations from the useR!2019 conference. Contributed presentations from the annual R conference are coming soon.

Model interpretability with automated machine learning in Azure Machine Learning service.

Tutorial: Deploying Azure ML Service models to Azure Functions for inference.

A comparison of pre-trained text-recognition services from Amazon, Google and Microsoft.

What's the difference between deep learning, machine learning, and AI?

A tutorial on pre-training BERT models with Google Cloud TPUs.

Debugging ML models: dealing with information leakage and data bias.

Applications

Stitch Fix has built a centralized experimentation platform, to standardize and automate the process of A/B testing.

IMT Atlantique, a Microsoft AI for Earth grantee, uses satellite remote sensing data to monitor changes in ocean levels.

How AI is helping track endangered species.

The Climate Modeling Alliance uses Julia and CUDA to simulate climate models.

Parrotron: a Google Research project to facilitate understanding of speech-impaired people by devices and other people.

Find previous editions of the monthly AI roundup here.

Customize Control Information for full metadata requests in odata.net


Background

OData supports three metadata levels for the JSON format, namely:

  1. No Metadata
  2. Minimal Metadata
  3. Full Metadata

No metadata, as the name suggests, does not include control information other than nextLink and count. Minimal metadata responses usually include context, etag, deltaLink, etc., whereas full metadata responses contain all the control information, such as navigationLink, associationLink, readLink, mediaReadLink, etc.
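For illustration, a single-entity response at the full metadata level might carry control information like the following. The entity, property names, and URLs here are hypothetical; the @odata.* annotation names are the standard ones:

```json
{
  "@odata.context": "https://host/service/$metadata#People/$entity",
  "@odata.id": "People('russellwhyte')",
  "@odata.editLink": "People('russellwhyte')",
  "@odata.readLink": "People('russellwhyte')",
  "Friends@odata.navigationLink": "People('russellwhyte')/Friends",
  "Friends@odata.associationLink": "People('russellwhyte')/Friends/$ref",
  "UserName": "russellwhyte"
}
```

At the minimal metadata level, most of these are omitted because clients can compute them by convention from @odata.context.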

ODataMetadataSelector

With odata.net 7.6.1 or later, it is possible to customize which metadata gets written out. A new class, ODataMetadataSelector, is introduced, exposing methods that can be overridden to add or remove control information related to navigation properties, stream properties, and operations.

 

public virtual IEnumerable<IEdmOperation> SelectBindableOperations(IEdmStructuredType type, IEnumerable<IEdmOperation> bindableOperations)
{
    // Select bindable operations
    return bindableOperations;
}

public virtual IEnumerable<IEdmNavigationProperty> SelectNavigationProperties(IEdmStructuredType type, IEnumerable<IEdmNavigationProperty> navigationProperties)
{
   // Select the navigation properties from the type
   return navigationProperties;
}

public virtual IEnumerable<IEdmStructuralProperty> SelectStreamProperties(IEdmStructuredType type, IEnumerable<IEdmStructuralProperty> selectedStreamProperties)
{
   // Select the streamProperties from the type
   return selectedStreamProperties;
}

An instance of this class can be set via the ODataMessageWriterSettings property MetadataSelector. Since ODataMessageWriterSettings can be dependency injected, you can also use dependency injection for ODataMetadataSelector by injecting a custom ODataMessageWriterSettings.

Since the selector is invoked for every resource being written out, it is highly recommended to cache the results where possible.

Usage Example

You can annotate the properties of types for which you don't want to write out metadata. The sample here omits control information for all such annotated navigation properties. You can also select metadata by arbitrary criteria, such as omitting all actions.

public class SampleMetadataSelector : ODataMetadataSelector
{
    private readonly IEdmModel model;

    // Caches, per navigation property, whether its control information should be written.
    private readonly IDictionary<IEdmVocabularyAnnotatable, bool> annotationCache;

    public SampleMetadataSelector(IEdmModel model, IDictionary<IEdmVocabularyAnnotatable, bool> cache)
    {
        this.model = model;
        this.annotationCache = cache;
    }

    public override IEnumerable<IEdmNavigationProperty> SelectNavigationProperties(IEdmStructuredType type, IEnumerable<IEdmNavigationProperty> navigationProperties)
    {
        // Write control information only for properties without the annotation.
        return navigationProperties.Where(s =>
        {
            if (!annotationCache.ContainsKey(s))
            {
                IEdmVocabularyAnnotation annotation = model.FindVocabularyAnnotations<IEdmVocabularyAnnotation>(s, "AnnotationName").FirstOrDefault();
                annotationCache[s] = annotation == null;
            }

            return annotationCache[s];
        });
    }

    public override IEnumerable<IEdmOperation> SelectBindableOperations(IEdmStructuredType type, IEnumerable<IEdmOperation> bindableOperations)
    {
        // Omit all actions.
        return bindableOperations.Where(f => f.SchemaElementKind != EdmSchemaElementKind.Action);
    }
}

 

The post Customize Control Information for full metadata requests in odata.net appeared first on OData.

Azure SDK August 2019 preview and a dive into consistency


The second previews of the Azure SDKs that follow the latest Azure API Guidelines and Patterns are now available (.NET, Java, JavaScript, Python). These previews contain bug fixes, new features, and additional work toward guidelines adherence.

What’s New

The SDKs have many new features, bug fixes, and improvements. Some of the new features are below, but please read the release notes linked above and changelogs for details.

  • Storage Libraries for Java now include Files and Queues support.
  • Storage Libraries for Python have added Async versions of the APIs for Files, Queues, and Blobs.
  • Event Hubs libraries across languages have expanded support for sending multiple messages in a single call. You can now create a batch, which avoids the error scenario where a call exceeds size limits and gives developers with bandwidth concerns control over batch size.
  • Event Hubs libraries across languages have introduced a new model for consuming events via the EventProcessor class, which simplifies checkpointing today and will handle load balancing across partitions in upcoming previews.

Diving deeper into the guidelines: consistency

These Azure SDKs represent a cross-organizational effort to provide an ergonomic experience to every developer using every platform and as mentioned in the previous blog post, developer feedback helped define the following set of principles:

  • Idiomatic
  • Consistent
  • Approachable
  • Diagnosable
  • Compatible

Today we will deep dive into consistency.

Consistent

Feedback from developers and user studies has shown that consistent APIs are generally easier to learn and remember. To guide Azure SDKs toward consistency, the guidelines contain the consistency principle:

  • Client libraries should be consistent within the language, consistent with the service and consistent between all target languages. In cases of conflict, consistency within the language is the highest priority and consistency between all target languages is the lowest priority.
  • Service-agnostic concepts such as logging, HTTP communication, and error handling should be consistent. The developer should not have to relearn service-agnostic concepts as they move between client libraries.
  • Consistency of terminology between the client library and the service is a good thing that aids in diagnosability.
  • All differences between the service and client library must have a good, articulated reason for existing, rooted in idiomatic usage.
  • The Azure SDK for each target language feels like a single product developed by a single team.
  • There should be feature parity across target languages. This is more important than feature parity with the service.

Let’s look closer at the second bullet point, “Service-agnostic concepts such as logging, HTTP communication, and error handling should be consistent.” Developers pointed out APIs that worked nicely on their own, but weren’t always perfectly consistent with each other. For example:

Blob storage used a skip/take style of paging, while returning a sync iterator as the result set:

let marker = undefined;
do {
    const listBlobsResponse = await containerURL.listBlobFlatSegment(
        Aborter.none,
        marker
    );

    marker = listBlobsResponse.nextMarker;
    for (const blob of listBlobsResponse.segment.blobItems) {
        console.log(`Blob: ${blob.name}`);
    }
} while (marker);

 

Cosmos used an async iterator to return results:

for await (const results of this.container.items.query(querySpec).getAsyncIterator()) {
    console.log(results.result);
}

 

Event Hubs used a ‘take’ style call that returned an array of results of a specified size:

const myEvents = await client.receiveBatch("my-partitionId", 10);

 

While using all three of these services together, developers indicated they had to work harder to remember each pattern, or refresh their memory by reviewing code samples.

The Consistency SDK Guideline

The JavaScript guidelines specify how to handle this situation in the section Modern and Idiomatic JavaScript:

☑️ YOU SHOULD use async functions for implementing asynchronous library APIs.

If you need to support ES5 and are concerned with library size, use async when combining asynchronous code with control flow constructs. Use promises for simpler code flows. async adds code bloat (especially when targeting ES5) when transpiled.

☑️ DO use Iterators and Async Iterators for sequences and streams of all sorts.

Both iterators and async iterators are built into JavaScript and easy to consume. Other streaming interfaces (such as node streams) may be used where appropriate as long as they're idiomatic.

In a nutshell, it says that when an asynchronous call returns a sequence (that is, a list), async iterators are preferred.
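A toy sketch of the pattern: an async generator hides page-by-page fetching behind a single `for await` loop. The mock pages below stand in for service calls:

```javascript
// Mock "service": three pages of results, with an empty page ending the sequence.
const pages = [['a', 'b'], ['c'], []];

async function fetchPage(i) {
  return pages[i] ?? []; // a real client would make an HTTP request here
}

// Async generator: consumers never see markers or page boundaries.
async function* listItems() {
  for (let i = 0; ; i++) {
    const page = await fetchPage(i);
    if (page.length === 0) return;
    yield* page;
  }
}

(async () => {
  for await (const item of listItems()) {
    console.log(item); // a, b, c — no continuation-token bookkeeping in user code
  }
})();
```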

In practice, this is how that principle is applied in the latest Azure SDK Libraries for Storage, Cosmos, and Event Hubs.

Storage, using an async iterator to list blobs:
for await (const blob of containerClient.listBlobsFlat()) {
   console.log(`Blob: ${blob.name}`);
}

 

Cosmos, still using async iterators to list items:
for await (const response of container.items.readAll({ maxItemCount: 20 }).getAsyncIterator()) {
    for (const item of response.resources) {
        console.log(item.id);
    }
}

 

Event Hubs – now using an async iterator to process events:
for await (const events of consumer.getEventIterator()) {
    console.log(`${events}`);
}

As you can see, a service-agnostic concept—in this case paging—has been standardized across all three services.

Feedback

If you have feedback on consistency or think you've found a bug after trying the August 2019 Preview (.NET, Java, JavaScript, Python), please file an issue or pull request on GitHub (guidelines, .NET, Java, JavaScript, Python), or reach out to @AzureSDK on Twitter. We welcome contributions to these guidelines and libraries!

Announcing the preview of Azure Actions for GitHub


On Thursday, August 8, 2019, GitHub announced the preview of GitHub Actions with support for Continuous Integration and Continuous Delivery (CI/CD). Actions makes it possible to create simple, yet powerful pipelines and automate software compilation and delivery. Today, we are announcing the preview of Azure Actions for GitHub.

With these new Actions, developers can quickly build, test, and deploy code from GitHub repositories to the cloud with Azure.

You can find our first set of Actions grouped into four repositories on GitHub, each one containing documentation and examples to help you use GitHub for CI/CD and deploy your apps to Azure.

  • azure/actions (login): Authenticate with an Azure subscription.
  • azure/appservice-actions: Deploy apps to Azure App Services using the features Web Apps and Web Apps for Containers.
  • azure/container-actions: Connect to container registries, including Docker Hub and Azure Container Registry, as well as build and push container images.
  • azure/k8s-actions: Connect and deploy to a Kubernetes cluster, including Azure Kubernetes Service (AKS).

Connect to Azure

The login action (azure/actions) allows you to securely connect to an Azure subscription.

The process requires a service principal, which can be generated using the Azure CLI per the instructions. Use the GitHub Actions built-in secret store to safely store the output of this command.

If your workflow involves containers, you can also use the azure/container-actions/docker-login and azure/k8s-actions/aks-set-context Actions to connect to Azure services like Container Registry and AKS, respectively.

These Actions help set the context for the rest of the workflow. For example, once you have used azure/container-actions/docker-login, subsequent Actions in the workflow can build, tag, and push container images to Container Registry.

Deploy a web app

Azure App Service is a managed platform for deploying and scaling web applications. You can easily deploy your web app to Azure App Service with the azure/appservice-actions/webapp and azure/appservice-actions/webapp-container Actions.

The azure/appservice-actions/webapp action takes the app name and the path to an archive (*.zip, *.war, *.jar) or folder to deploy.

The azure/appservice-actions/webapp-container action supports deploying containerized apps, including multi-container ones. Combined with azure/container-actions/docker-login, you can create a complete workflow that builds a container image, pushes it to Container Registry, and then deploys it to Web Apps for Containers.

Deploy to Kubernetes

azure/k8s-actions/k8s-deploy helps you connect to a Kubernetes cluster, bake and deploy manifests, substitute artifacts, check rollout status, and handle secrets within AKS.

The azure/k8s-actions/k8s-create-secret action takes care of creating Kubernetes secret objects, which help you manage sensitive information such as passwords and API tokens. These notably include the Docker-registry secret, which is used by AKS itself to pull a private image from a registry. This action makes it possible to populate the Kubernetes cluster with values from the GitHub Actions’ built-in secret store.

Our container-centric Actions, including those for Kubernetes and for interacting with a Docker registry, aren’t specific to Azure, and can be used with any Kubernetes cluster, including self-hosted ones, running on-premises or on other clouds, as well as any Docker registry.

Azure Actions for GitHub

Full example

Here is an example of an end-to-end workflow which builds a container image, pushes it to Container Registry and then deploys to an AKS cluster by using manifest files.

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@master
   
    - uses: azure/container-actions/docker-login@master
      with:
        login-server: contoso.azurecr.io
        username: ${{ secrets.REGISTRY_USERNAME }}
        password: ${{ secrets.REGISTRY_PASSWORD }}
   
    - run: |
        docker build . -t contoso.azurecr.io/k8sdemo:${{ github.sha }}
        docker push contoso.azurecr.io/k8sdemo:${{ github.sha }}
     
    # Set the target AKS cluster.
    - uses: azure/k8s-actions/aks-set-context@master
      with:
        creds: '${{ secrets.AZURE_CREDENTIALS }}'
        cluster-name: contoso
        resource-group: contoso-rg
       
    - uses: azure/k8s-actions/k8s-create-secret@master
      with:
        container-registry-url: contoso.azurecr.io
        container-registry-username: ${{ secrets.REGISTRY_USERNAME }}
        container-registry-password: ${{ secrets.REGISTRY_PASSWORD }}
        secret-name: demo-k8s-secret

    - uses: azure/k8s-actions/k8s-deploy@master
      with:
        manifests: |
          manifests/deployment.yml
          manifests/service.yml
        images: |
          contoso.azurecr.io/k8sdemo:${{ github.sha }}
        imagepullsecrets: |
          demo-k8s-secret

More Azure Actions

Building on the momentum of GitHub Actions, today we are releasing these first Azure Actions in preview. In the next few months we will continue improving the available Actions, and we will release new ones to cover more Azure services.

Please try out the GitHub Actions for Azure and share your feedback on Twitter via @AzureDevOps, or through the Developer Community. If you encounter a problem during the preview, please open an issue on the GitHub repository for the specific action.

New for developers: Azure Cosmos DB .NET SDK v3 now available


The Azure Cosmos DB team is announcing the general availability of version 3 of the Azure Cosmos DB .NET SDK, released in July. Thank you to all who gave feedback during our preview.

In this post, we’ll walk through the latest improvements that we’ve made to enhance the developer experience in .NET SDK v3.

You can get the latest version of the SDK through NuGet and contribute on GitHub.

# Using the .NET CLI
dotnet add package Microsoft.Azure.Cosmos

# Using the NuGet Package Manager console
Install-Package Microsoft.Azure.Cosmos

What is Azure Cosmos DB?

Azure Cosmos DB is a globally distributed, multi-model database service that enables you to read and write data from any Azure region. It offers turnkey global distribution, guarantees single-digit millisecond latencies at the 99th percentile, 99.999 percent high availability, and elastic scaling of throughput and storage.

What is new in Azure Cosmos DB .NET SDK version 3?

Version 3 of the SDK contains numerous usability and performance improvements, including a new intuitive programming model, support for stream APIs, built-in support for change feed processor APIs, the ability to scale non-partitioned containers, and more. The SDK targets .NET Standard 2.0 and is open sourced on GitHub.

For new workloads, we recommend starting with the latest version 3.x SDK for the best experience. We have no immediate plans to retire version 2.x of the .NET SDK.

Targets .NET Standard 2.0

We’ve unified the existing Azure Cosmos DB .NET Framework and .NET Core SDKs into a single SDK, which targets .NET Standard 2.0. You can now use the .NET SDK in any platform that implements .NET Standard 2.0, including your .NET Framework 4.6.1+ and .NET Core 2.0+ applications.

Open source on GitHub

The Azure Cosmos DB .NET v3 SDK is open source, and our team is planning to do development in the open. To that end, we welcome any pull requests and will be logging issues and tracking feedback on GitHub.

New programming model with fluent API surface

Since the preview, we’ve continued to improve the object model for a more intuitive developer experience. We’ve created a new top level CosmosClient class to replace DocumentClient and split its methods into modular database and container classes. From our usability studies, we’ve seen that this hierarchy makes it easier for developers to learn and discover the API surface.

We’ve also added in fluent builder APIs, which make it easier to create CosmosClient, Container, and ChangeFeedProcessor classes with custom options.

View all samples on GitHub.
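To sketch the new model (a minimal illustration; the endpoint, key, and database/container names below are placeholders, not from the original post):

```csharp
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Cosmos.Fluent;

// Inside an async method. Substitute your own account endpoint and key.
CosmosClient client = new CosmosClientBuilder(
        "https://<account>.documents.azure.com:443/", "<key>")
    .WithApplicationRegion(Regions.WestUS2)
    .Build();

// Databases and containers are now first-class classes under CosmosClient,
// replacing the flat DocumentClient surface.
Database database = await client.CreateDatabaseIfNotExistsAsync("demo-db");
Container container = await database.CreateContainerIfNotExistsAsync(
    "demo-container", "/partitionKey");
```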

Stream APIs for high performance

The previous versions of the Azure Cosmos DB .NET SDKs always serialized and deserialized the data to and from the network. In the context of an ASP.NET Web API, this can lead to performance overhead. Now, with the new stream API, when you read an item or query, you can get the stream and pass it to the response without deserialization overhead, using the new GetItemQueryStreamIterator and ReadItemStreamAsync methods. To learn more, refer to the GitHub sample.
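As a rough sketch of the pattern (names are illustrative; this assumes a Container instance and an output stream such as HttpContext.Response.Body):

```csharp
// Read the raw item stream and relay it without deserialization overhead.
using (ResponseMessage response = await container.ReadItemStreamAsync(
    id: "item-id", partitionKey: new PartitionKey("pk-value")))
{
    if (response.IsSuccessStatusCode)
    {
        // outputStream is a placeholder, e.g. HttpContext.Response.Body.
        await response.Content.CopyToAsync(outputStream);
    }
}
```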

Easier to test and more extensible

In .NET SDK version 3, all APIs are mockable, making for easier unit testing.

We also introduced an extensible request pipeline, so you can pass in custom handlers that will run when sending requests to the service. For example, you can use these handlers to log request information in Azure Application Insights, define custom retry polices, and more. You can also now pass in a custom serializer, another commonly requested developer feature.

Use the Change Feed Processor APIs directly from the SDK

One of the most popular features of Azure Cosmos DB is the change feed, which is commonly used in event-sourcing architectures, stream processing, data movement scenarios, and to build materialized views. The change feed enables you to listen to changes on a container and get an incremental feed of its records as they are created or updated.

The new SDK has built-in support for the Change Feed Processor APIs, which means you can use the same SDK for building your application and change feed processor implementation. Previously, you had to use the separate change feed processor library.

To get started, refer to the documentation "Change feed processor in Azure Cosmos DB."
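A minimal sketch of wiring up the processor (the database, container, and processor names are illustrative; a separate "leases" container stores the processor's coordination state):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

Container leaseContainer = client.GetContainer("demo-db", "leases");

ChangeFeedProcessor processor = client.GetContainer("demo-db", "demo-container")
    .GetChangeFeedProcessorBuilder<dynamic>("demoProcessor",
        (IReadOnlyCollection<dynamic> changes, CancellationToken token) =>
        {
            foreach (var item in changes)
            {
                Console.WriteLine($"Change: {item}");
            }
            return Task.CompletedTask;
        })
    .WithInstanceName("host-1")
    .WithLeaseContainer(leaseContainer)
    .Build();

await processor.StartAsync();
```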

Ability to scale non-partitioned containers

We’ve heard from many customers who have non-partitioned or “fixed” containers that they wanted to scale them beyond their 10GB storage and 10,000 RU/s provisioned throughput limit. With version 3 of the SDK, you can now do so, without having to create a new container and move your data.

All non-partitioned containers now have a system partition key “_partitionKey” that you can set to a value when writing new items. Once you begin using the _partitionKey value, Azure Cosmos DB will scale your container as its storage volume increases beyond 10GB. If you want to keep your container as is, you can use the PartitionKey.None value to read and write existing data without a partition key.
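For illustration (the id and item shape here are hypothetical), existing data written without a partition key is addressed with PartitionKey.None:

```csharp
// Read an item created before the container had a partition key.
ItemResponse<dynamic> legacy =
    await container.ReadItemAsync<dynamic>("legacy-id", PartitionKey.None);

// New items can instead supply a value for the system "_partitionKey" property,
// allowing the container to grow beyond the old 10GB limit.
```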

Easier APIs for scaling throughput

We’ve redesigned the APIs for scaling provisioned throughput (RU/s) up and down. You can now use the ReadThroughputAsync method to get the current throughput and ReplaceThroughputAsync to change it. View sample.
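A short sketch of the two calls (the RU/s value is illustrative):

```csharp
// Current provisioned throughput; null if throughput is provisioned
// at a different level (e.g., the database).
int? currentRUs = await container.ReadThroughputAsync();

// Scale the container to 1,000 RU/s.
await container.ReplaceThroughputAsync(1000);
```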

Get started

To get started with the new Azure Cosmos DB .NET SDK version 3, add our new NuGet package to your project, then follow the new tutorial and quickstart. We’d love to hear your feedback! You can log issues on our GitHub repository.

Stay up-to-date on the latest Azure #CosmosDB news and features by following us on Twitter @AzureCosmosDB. We can't wait to see what you will build with Azure Cosmos DB and the new .NET SDK!


Side-by-side Minor Version MSVC Toolsets in Visual Studio 2019


Visual Studio 2019 version 16.1 Preview 3 ships with the first side-by-side minor versions of the v142 MSVC toolset. We first shipped minor side-by-side versions of MSVC toolsets with Visual Studio 2017, but a few things have changed in 2019. This post covers what’s new; primarily more granular versions of the toolsets in the installer and support for CMake projects.

For those not familiar with these minor version MSVC toolsets, I recommend checking out our previous post about minor side-by-side toolsets. It includes many more details about when and how you should use this feature. In general, you should not need to use these minor versioned toolsets. Instead, the feature is intended as an “escape hatch” for developers who find that there is a bug, either in their source code or in MSVC, that cannot be easily worked around or fixed in a timely fashion. If there’s a conformance issue in your source code, the best option is to apply the proper fixes to make your code conforming if possible (sometimes there are too many required changes in your code to fix it all immediately). If you believe that there’s a bug in MSVC, it’s best to talk with us so that we can fix the bug or supply a workaround.

Installing Minor Version Toolsets

The biggest change between Visual Studio 2017 and 2019 is in how these minor toolsets are factored. We began breaking toolsets apart by architecture in Visual Studio 2019 and minor toolsets follow this practice too. Whereas in 2017 there was only one minor toolset to install, in 2019 you will need to select all the architectures you build your projects for.

You can install minor versions of the MSVC toolset through the Visual Studio installer. The “Desktop C++ Development” workload will always install the latest version of the toolset. However, if you navigate to the “individual components” tab in the installer and scroll to “Compilers, build tools, and runtimes” you will see all of the available MSVC toolsets, including the minor versions.

For example, you may see both “(v14.20)” and “(v14.21)” versions of the “MSVC v142” toolset. In the latest preview release you will also see “(v14.22)” and “(v14.23)”, the latest versions.

Minor versions of the MSVC toolsets in the Visual Studio installer.

Using Minor Version Toolsets

If you are using C++ MSBuild projects, using a minor version of the MSVC toolset has not changed.

Side-by-side minor version MSVC toolsets don’t appear in the “Platform Toolset” options of the Project Configuration Properties. To enable them you need to edit the .vcxproj file for your project. Each side-by-side minor version MSVC toolset includes a .props file that can be included in your project’s .vcxproj file.

Before you start, you should add the -Bv compiler option as an Additional Option on the compiler command line. This will show the verbose compiler version information in the build Output box. Just enter “-Bv” in the Project Properties > C/C++ > Command Line edit box.

Enable “-Bv” compiler option with project property pages “C/C++ > Command Line.”

Now, open the “VC\Auxiliary\Build\14.20” directory in the folder where you installed Visual Studio version 16.1 Preview 3. For example, using the default install location you’ll find it here: “C:\Program Files (x86)\Microsoft Visual Studio\Preview\Enterprise\VC\Auxiliary\Build\14.20”. You should see three files in this folder. You’ll need to copy one of them, “Microsoft.VCToolsVersion.14.20.props”, into your solution directory.

Copy the toolsets “.props” file.

Next, open the folder containing your solution by right-clicking on the solution and selecting “Open Folder in File Explorer”.

Open the project directory.

Copy the “Microsoft.VCToolsVersion.14.20.props” into your solution directory. The file should sit in the same directory as your project’s solution file, e.g., Project6.sln.

Copy the toolset’s “.props” file to the project directory.

Now unload your project by right-clicking on the project and selecting “Unload Project”. Once the project is unloaded, you can edit the project by clicking on it and selecting “Edit [ProjectName]”. Locate the line that says:

<Import Project="$(VCTargetsPath)\Microsoft.Cpp.Default.props" />

Add a line directly above this line that imports the Microsoft.VCToolsVersion.14.20.props that you just copied into the solution directory:

<Import Project="$(SolutionDir)\Microsoft.VCToolsVersion.14.20.props" />
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.Default.props" />

Now, save the file, then right-click on the project name and select “Reload Project”. If you haven’t already saved the file, you’ll be prompted to close the open .vcxproj file. Select “Yes” to close the file. Now when you rebuild the solution, you’ll see that you’re using the 14.20 MSVC compiler toolset.

Using Minor Version Toolsets with CMake

You can also seamlessly use these toolsets with CMake projects in Visual Studio. To do this, create an environment variable in your CMakeSettings file called “VCToolsVersion” and set it to the minor version you need to use, for example “14.20”. You will need to modify the JSON of the CMake Settings file directly. If you are in the CMake Settings editor, you can access the JSON with the “Edit JSON” button in the top right corner.
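The relevant fragment of CMakeSettings.json might look like this (a sketch; the configuration shown is illustrative and other fields are omitted):

```json
{
  "environments": [
    { "VCToolsVersion": "14.20" }
  ],
  "configurations": [
    {
      "name": "x64-Debug",
      "generator": "Ninja",
      "configurationType": "Debug",
      "buildRoot": "${projectDir}\\out\\build\\${name}"
    }
  ]
}
```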

Now, you just need to delete the cache “Project > CMake Cache > Delete Cache” and regenerate it “Project > Generate Cache.” You can confirm that CMake is using the right toolset by examining the compiler path in the Output Window.

Toolset 14.20 being used instead of the latest version 14.23 with the “VCToolsVersion” variable.

If you are working with CMake projects outside of the IDE, you can still use minor versions of the MSVC toolsets. Follow the instructions below for setting up a command line to use the toolset you need, and then just run CMake to configure the project. Once the project is configured, the minor toolset selected will be saved in the CMake cache and can then be used from any environment.

Using Minor Version Toolsets with the Command Line

If you need to use a side-by-side minor version MSVC toolset from the command line you just need to customize a developer command prompt. The command prompts installed with Visual Studio 2019 version 16.1 Preview 3 are located in the VC\Auxiliary\Build subdirectory of your VS install dir. For example, with the default installation path, they are located in the C:\Program Files (x86)\Microsoft Visual Studio\Preview\Enterprise\VC\Auxiliary\Build directory.

In that folder you’ll find four developer command prompts (named vcvars*.bat). Pick any one and create a copy to edit. The contents of these files are pretty simple: they all just invoke vcvarsall.bat with the proper architecture parameter. We’ll do the same, but add a new parameter that tells vcvarsall.bat to set up the environment for the v14.20 toolset: -vcvars_ver=14.20.
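For example, a copy of vcvars64.bat edited this way reduces to a single line (a sketch; %~dp0 resolves to the directory containing the script):

```bat
@call "%~dp0vcvarsall.bat" x64 -vcvars_ver=14.20
```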

You can run “cl -Bv” to show that the environment is set up for the right version of the tools.

We’d love for you to download Visual Studio 2019 and give it a try. As always, we welcome your feedback. We can be reached via the comments below or via email (visualcpp@microsoft.com). If you encounter problems with Visual Studio or MSVC, or have a suggestion for us, please let us know through Help > Send Feedback > Report A Problem / Provide a Suggestion in the product, or via Developer Community. You can also find us on Twitter (@VisualC).

The post Side-by-side Minor Version MSVC Toolsets in Visual Studio 2019 appeared first on C++ Team Blog.

Kristiansund collaborates with Microsoft Teams to better serve its residents

August patches for Azure DevOps Server and Team Foundation Server


For the August release, we are releasing patches for Azure DevOps Server 2019.0.1, TFS 2018 Update 3.2, and TFS 2017 Update 3.1. This month, there are no security fixes; these patches include functional changes.

Azure DevOps Server 2019.0.1: We added information to service connections to clarify that they are authorized for all pipelines by default.

TFS 2018 Update 3.2 and TFS 2017 Update 3.1: Fixed a bug where the Work Item Tracking Warehouse Sync stops syncing with an error: “TF221122: An error occurred running job Work Item Tracking Warehouse Sync for team project collection or Team Foundation server ATE. —> System.Data.SqlClient.SqlException: Cannot create compensating record. Missing historic data.”

Azure DevOps Server 2019.0.1 Patch 2

If you have Azure DevOps Server 2019, you should first update to Azure DevOps Server 2019.0.1. Once on 2019.0.1, install Azure DevOps Server 2019.0.1 Patch 2.

Verifying Installation

To verify if you have this update installed, you can check the version of the following file: [INSTALL_DIR]\Application Tier\Web Services\bin\Microsoft.TeamFoundation.Framework.Server.dll. Azure DevOps Server 2019 is installed to c:\Program Files\Azure DevOps Server 2019 by default.

After installing Azure DevOps Server 2019.0.1 Patch 2, the version will be 17.143.29207.6.

TFS 2018 Update 3.2 Patch 6

If you have TFS 2018 and you are seeing the TF221122 error in your Work Item Tracking Warehouse Sync job, you should first update to TFS 2018 Update 3.2. Once on Update 3.2, install TFS 2018 Update 3.2 Patch 6.

Verifying Installation

To verify if you have this update installed, you can check the version of the following file: [TFS_INSTALL_DIR]\Application Tier\Web Services\bin\Microsoft.TeamFoundation.WorkItemTracking.Web.dll. TFS 2018 is installed to c:\Program Files\Microsoft Team Foundation Server 2018 by default.

After installing TFS 2018 Update 3.2 Patch 6, the version will be 16.131.29131.4.

TFS 2017 Update 3.1 Patch 7

If you have TFS 2017 and you are seeing the TF221122 error in your Work Item Tracking Warehouse Sync job, you should first update to TFS 2017 Update 3.1. Once on Update 3.1, install TFS 2017 Update 3.1 Patch 7.

Verifying Installation

To verify if you have a patch installed, you can check the version of the following file: [TFS_INSTALL_DIR]\Application Tier\Web Services\bin\Microsoft.TeamFoundation.Server.WebAccess.Admin.dll. TFS 2017 is installed to c:\Program Files\Microsoft Team Foundation Server 15.0 by default.

After installing TFS 2017 Update 3.1 Patch 7, the version will be 15.117.29131.0.

The post August patches for Azure DevOps Server and Team Foundation Server appeared first on Azure DevOps Blog.

New C++ Core Check Rules


The C++ Core Guidelines Checker receives three new rules with the release of Visual Studio version 16.3 Preview 2. In addition, some warnings published in the warnings.h that ships with Visual Studio have been moved or renamed.

Below is a quick summary of these additions. For more detailed information, please see the C++ Core Guidelines Checker Reference documentation.

If you’re just getting started with native code analysis tools, take a look at our introductory quick start for Code Analysis for C/C++.

New rule set

The “Enum Rules” set has been added in this release. It can be enabled by selecting “C++ Core Check Enum Rules” in the Project Settings dialog. This rule set can be used to detect common errors when using enums as specified in the Core Guidelines Enum section.

New rules

Const rules

  • C26814 – “USE_CONSTEXPR_RATHER_THAN_CONST”: C26814 is a more aggressive implementation of Con.5. Our previous warning, C26498 (“USE_CONSTEXPR_FOR_FUNCTIONCALL”), checks for constexpr conversion candidates by evaluating all const variables whose values are derived from constexpr functions. This new rule evaluates all const variables to determine whether their values can be determined at compile time. NOTE: This rule is not included in the “Microsoft Native Recommended Rules” rule set by default and will need to be added or run via the “C++ Core Check Const Rules” rule set.

Enum rules

  • C26812 – “USE_ENUM_CLASS_INSTEAD_OF_ENUM”: C26812 implements Enum.3. It recommends declaring all enums as scoped enums; that is, declaring “enum” as “enum class”. This is largely to prevent unintended errors when using enums, as they are too readily converted to int.
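A minimal sketch (not taken from the Core Guidelines themselves) of the implicit conversion the rule guards against:

```cpp
// Unscoped enums convert to int implicitly -- the accident C26812 guards against.
enum UnscopedColor { Red, Green };

// Scoped enums require an explicit cast, preventing silent conversions.
enum class ScopedColor { Red, Green };

int unscoped_as_int() { return Green; }  // compiles without a cast
int scoped_as_int() { return static_cast<int>(ScopedColor::Green); }  // cast required
// int bad() { return ScopedColor::Green; }  // would not compile
```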

Type rules

  • C26478 – “NO_MOVE_OP_ON_CONST”: C26478 is designed to prevent unnecessary calls to “std::move”. Specifically, this rule hopes to curb the usage of “std::move” on constant objects. When calling “std::move” on a const object, the move operation performs a copy rather than moving the ownership of the object, which is likely not what the developer intended. For further reading, refer to ES.56.
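A small sketch (illustrative, not from the original post) showing why the rule exists: moving from a const object silently copies.

```cpp
#include <string>
#include <utility>

// std::move on a const object yields "const std::string&&", which binds to the
// copy constructor -- so the "move" below is really a copy (what C26478 flags).
bool move_from_const_copies(const char* text) {
    const std::string source = text;
    std::string target = std::move(source);   // copies; source is untouched
    return source == text && target == text;  // both hold: it was a copy
}
```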

Warnings.h changes

  • The warning C26477 “USE_NULLPTR” was renamed to “USE_NULLPTR_NOT_CONSTANT”.
  • The rule category “CPPCORECHECK_EXPERIMENTAL_WARNINGS” was removed from this release. The warning it contained, C26800 (“USE_OF_A_MOVED_FROM_OBJECT”), was added to the “CPPCORECHECK_LIFETIME_WARNINGS” rules.
  • The warnings C26810 and C26811 (“COROUTINES_USE_AFTER_FREE_CAPTURE” and “COROUTINES_USE_AFTER_FREE_PARAM”, respectively) were removed from the “CPPCORECHECK_CONCURRENCY_WARNINGS” category and added to “CPPCORECHECK_LIFETIME_WARNINGS”.

Feedback

We would appreciate it if you tried these new rules out and gave us feedback on them. We can be reached via the comments below, via email (visualcpp@microsoft.com), or on Twitter @VisualC. If you encounter any problems, please report them via the Report A Problem tool in Visual Studio or on the Visual Studio Developer Community.


The post New C++ Core Check Rules appeared first on C++ Team Blog.

Your single source for Azure best practices


Optimizing your Azure workloads can feel like a time-consuming task. With so many services that are constantly evolving it’s challenging to stay on top of, let alone implement, the latest best practices and ensure you’re operating in a cost-efficient manner that delivers security, performance, and reliability.

Many Azure services offer best practices and advice. Examples include Azure Security Center, Azure Cost Management, and Azure SQL Database. But what if you want a single source for Azure best practices, a central location where you can see and act on every optimization recommendation available to you? That’s why we created Microsoft Azure Advisor, a service that helps you optimize your resources for high availability, security, performance, and cost, pulling in recommendations from across Azure and supplementing them with best practices of its own.

In this blog, we’ll explore how you can use Advisor as your single destination for resource optimization and start getting more out of Azure.

What is Azure Advisor and how does it work?

Advisor is your personalized guide to Azure best practices. It analyzes your usage and configurations and offers recommendations to help you optimize your Azure resources for high availability, security, performance, and cost. Each of Advisor’s recommendations includes suggested actions and sharing features to help you quickly and easily remediate your recommendations and optimize your deployments. You can also configure Advisor to only show recommendations for the subscriptions and resource groups that mean the most to you, so you can focus on critical fixes. Advisor is available from the Azure portal, command line, and via REST API, depending on your needs and preferences.

An image showing the Azure Advisor overview page.

Ultimately, Advisor’s goal is to save you time while helping you get the most out of Azure. That’s why we’re making Advisor a single, central location for optimization that pulls in best practices from companion services like Azure Security Center.

How Azure Security Center integrates with Advisor

Our most recent integration with Advisor is Azure Security Center. Security Center helps you gain unmatched hybrid security management and threat protection. Microsoft uses a wide variety of physical, infrastructure, and operational controls to help secure Azure—but there are additional actions you need to take to help safeguard your workloads. Security Center can help you quickly strengthen your security posture and protect against threats.

Advisor has a new, streamlined experience for reviewing and remediating your security recommendations thanks to a tighter integration with Azure Security Center. As part of the enhanced integration, you’ll be able to:

  • See a detailed view of your security recommendations from Security Center directly in Advisor.
  • Get your security recommendations programmatically through the Advisor REST API, CLI, or PowerShell.
  • Review a summary of your security alerts from Security Center in Advisor.

An image showing the Azure Advisor security page.

The new Security Center experience in Advisor will help you more quickly and easily remediate security recommendations.

How Azure Cost Management integrates with Advisor

Another Azure service that provides best practice recommendations is Azure Cost Management, which helps you optimize cloud costs while maximizing your cloud potential. With Cost Management, you can monitor your spending, increase your organizational accountability, and boost your cloud efficiency.

An image showing the Azure Advisor cost page.

Advisor and Cost Management are also tightly integrated. Cost Management’s integration with Advisor means that you can see any cost recommendation in either service and act to optimize your cloud costs by taking advantage of reservations, rightsizing, or removing idle resources.

Again, this will help you streamline your optimizations.

Azure SQL DB Advisor, Azure App Service Advisor, and more

There’s no shortage of advice in Azure. Many other services including Azure SQL Database and Azure App Service include Advisor-like tools designed to help you follow best practices for those services and succeed in the cloud.

Advisor pulls in and displays recommendations from these services, so the choice is yours. You can review the optimizations in context—in a given instance of an Azure SQL database, for example—or in a single, centralized location in Advisor.

An image showing the SQL databases performance recommendations page.

We often recommend the Advisor approach. This way, you can see all your optimizations in a broader, more holistic context and remediate with the big picture in mind, without worrying that you’re missing anything. Plus, it’ll save you time switching between different resources.

Review your recommendations in one place with Advisor

Our recommendation? Use Advisor as your core resource optimization tool. You’ll find everything in a single location rather than having to visit different, more specialized locations. With the Advisor API, you can even integrate with your organization’s internal systems—like a ticketing application or dashboard—to get everything in one place on your end and plug into your own optimization workflows.

Visit Advisor in the Azure portal to get started reviewing, sharing, and remediating your recommendations. For more in-depth guidance, visit the Azure Advisor documentation. Let us know if you have a suggestion for Advisor by submitting an idea here in the Advisor forums.

Windows 10 SDK Preview Build 18956 available now!


Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 18956 or greater). The Preview SDK Build 18956 contains bug fixes and under development changes to the API surface area.

The Preview SDK can be downloaded from developer section on Windows Insider.

For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.

Things to note:

  • This build works in conjunction with previously released SDKs and Visual Studio 2017 and 2019. You can install this SDK and still continue to submit your apps that target Windows 10 build 1903 or earlier to the Microsoft Store.
  • The Windows SDK will now formally only be supported by Visual Studio 2017 and greater. You can download Visual Studio 2019 here.
  • This build of the Windows SDK will install only on Windows 10 Insider Preview builds.
  • In order to assist with script access to the SDK, the ISO can also be accessed through the following static URL: https://software-download.microsoft.com/download/sg/Windows_InsiderPreview_SDK_en-us_18956_1.iso.

Tools Updates

Message Compiler (mc.exe)

  • Now detects the Unicode byte order mark (BOM) in .mc files. If the .mc file starts with a UTF-8 BOM, it will be read as a UTF-8 file. Otherwise, if it starts with a UTF-16LE BOM, it will be read as a UTF-16LE file. If the -u parameter was specified, it will be read as a UTF-16LE file. Otherwise, it will be read using the current code page (CP_ACP).
  • Now avoids one-definition-rule (ODR) problems in MC-generated C/C++ ETW helpers caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of MCGEN_EVENTWRITETRANSFER are linked into the same binary, the MC-generated ETW helpers will now respect the definition of MCGEN_EVENTWRITETRANSFER in each .cpp file instead of arbitrarily picking one or the other).

Windows Trace Preprocessor (tracewpp.exe)

  • Now supports Unicode input (.ini, .tpl, and source code) files. Input files starting with a UTF-8 or UTF-16 byte order mark (BOM) will be read as Unicode. Input files that do not start with a BOM will be read using the current code page (CP_ACP). For backwards-compatibility, if the -UnicodeIgnore command-line parameter is specified, files starting with a UTF-16 BOM will be treated as empty.
  • Now supports Unicode output (.tmh) files. By default, output files will be encoded using the current code page (CP_ACP). Use command-line parameters -cp:UTF-8 or -cp:UTF-16 to generate Unicode output files.
  • Behavior change: tracewpp now converts all input text to Unicode, performs processing in Unicode, and converts output text to the specified output encoding. Earlier versions of tracewpp avoided Unicode conversions and performed text processing assuming a single-byte character set. This may lead to behavior changes in cases where the input files do not conform to the current code page. In cases where this is a problem, consider converting the input files to UTF-8 (with BOM) and/or using the -cp:UTF-8 command-line parameter to avoid encoding ambiguity.

TraceLoggingProvider.h

  • Now avoids one-definition-rule (ODR) problems caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of TLG_EVENT_WRITE_TRANSFER are linked into the same binary, the TraceLoggingProvider.h helpers will now respect the definition of TLG_EVENT_WRITE_TRANSFER in each .cpp file instead of arbitrarily picking one or the other).
  • In C++ code, the TraceLoggingWrite macro has been updated to enable better code sharing between similar events using variadic templates.

Signing your apps with Device Guard Signing

Breaking Changes

Removal of IRPROPS.LIB

In this release irprops.lib has been removed from the Windows SDK. Apps that were linking against irprops.lib can switch to bthprops.lib as a drop-in replacement.

API Updates, Additions and Removals

The following APIs have been added to the platform since the release of Windows 10 SDK, version 1903, build 18362.

Additions:

 

namespace Windows.AI.MachineLearning {
  public sealed class LearningModelSessionOptions {
    bool CloseModelOnSessionCreation { get; set; }
  }
}
namespace Windows.ApplicationModel {
  public sealed class Package {
    bool IsStub { get; }
  }
}
namespace Windows.ApplicationModel.DataTransfer {
  public sealed class DataPackage {
    event TypedEventHandler<DataPackage, object> ShareCanceled;
  }
}
namespace Windows.Devices.Input {
  public sealed class PenButtonListener
  public sealed class PenDockedEventArgs
  public sealed class PenDockListener
  public sealed class PenTailButtonClickedEventArgs
  public sealed class PenTailButtonDoubleClickedEventArgs
  public sealed class PenTailButtonLongPressedEventArgs
  public sealed class PenUndockedEventArgs
}
namespace Windows.Devices.Sensors {
  public sealed class Accelerometer {
    AccelerometerDataThreshold ReportThreshold { get; }
  }
  public sealed class AccelerometerDataThreshold
  public sealed class Barometer {
    BarometerDataThreshold ReportThreshold { get; }
  }
  public sealed class BarometerDataThreshold
  public sealed class Compass {
    CompassDataThreshold ReportThreshold { get; }
  }
  public sealed class CompassDataThreshold
  public sealed class Gyrometer {
    GyrometerDataThreshold ReportThreshold { get; }
  }
  public sealed class GyrometerDataThreshold
  public sealed class Inclinometer {
    InclinometerDataThreshold ReportThreshold { get; }
  }
  public sealed class InclinometerDataThreshold
  public sealed class LightSensor {
    LightSensorDataThreshold ReportThreshold { get; }
  }
  public sealed class LightSensorDataThreshold
  public sealed class Magnetometer {
    MagnetometerDataThreshold ReportThreshold { get; }
  }
  public sealed class MagnetometerDataThreshold
}
namespace Windows.Foundation.Metadata {
  public sealed class AttributeNameAttribute : Attribute
  public sealed class FastAbiAttribute : Attribute
  public sealed class NoExceptionAttribute : Attribute
}
namespace Windows.Globalization {
  public sealed class Language {
    string AbbreviatedName { get; }
    public static IVector<string> GetMuiCompatibleLanguageListFromLanguageTags(IIterable<string> languageTags);
  }
}
namespace Windows.Graphics.Capture {
  public sealed class GraphicsCaptureSession : IClosable {
    bool IsCursorCaptureEnabled { get; set; }
  }
}
namespace Windows.Graphics.DirectX {
  public enum DirectXPixelFormat {
    SamplerFeedbackMinMipOpaque = 189,
    SamplerFeedbackMipRegionUsedOpaque = 190,
  }
}
namespace Windows.Management.Deployment {
  public sealed class AddPackageOptions
  public enum DeploymentOptions : uint {
    StageInPlace = (uint)4194304,
  }
  public enum InstallStubOption
  public sealed class PackageManager {
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> AddPackageByUriAsync(Uri packageUri, AddPackageOptions options);
    IIterable<Package> FindProvisionedPackages();
    bool GetPackageStubPreference(string packageFamilyName);
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> RegisterPackageByUriAsync(Uri manifestUri, RegisterPackageOptions options);
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> RegisterPackagesByFullNameAsync(IIterable<string> packageFullNames, DeploymentOptions deploymentOptions);
    void SetPackageStubPreference(string packageFamilyName, bool useStub);
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> StagePackageByUriAsync(Uri packageUri, StagePackageOptions options);
  }
  public enum PackageTypes : uint {
    All = (uint)4294967295,
  }
  public sealed class RegisterPackageOptions
  public enum RemovalOptions : uint {
    PreserveRoamableApplicationData = (uint)128,
  }
  public sealed class StagePackageOptions
}
namespace Windows.Media.Audio {
  public sealed class AudioPlaybackConnection : IClosable
  public sealed class AudioPlaybackConnectionOpenResult
  public enum AudioPlaybackConnectionOpenResultStatus
  public enum AudioPlaybackConnectionState
}
namespace Windows.Media.Capture {
  public sealed class MediaCapture : IClosable {
    MediaCaptureRelativePanelWatcher CreateRelativePanelWatcher(StreamingCaptureMode captureMode, DisplayRegion displayRegion);
  }
  public sealed class MediaCaptureInitializationSettings {
    Uri DeviceUri { get; set; }
    PasswordCredential DeviceUriPasswordCredential { get; set; }
  }
  public sealed class MediaCaptureRelativePanelWatcher : IClosable
}
namespace Windows.Media.Capture.Frames {
  public sealed class MediaFrameSourceInfo {
    Panel GetRelativePanel(DisplayRegion displayRegion);
  }
}
namespace Windows.Media.Devices {
  public sealed class PanelBasedOptimizationControl
}
namespace Windows.Media.MediaProperties {
  public static class MediaEncodingSubtypes {
    public static string Pgs { get; }
    public static string Srt { get; }
    public static string Ssa { get; }
    public static string VobSub { get; }
  }
  public sealed class TimedMetadataEncodingProperties : IMediaEncodingProperties {
    public static TimedMetadataEncodingProperties CreatePgs();
    public static TimedMetadataEncodingProperties CreateSrt();
    public static TimedMetadataEncodingProperties CreateSsa(byte[] formatUserData);
    public static TimedMetadataEncodingProperties CreateVobSub(byte[] formatUserData);
  }
}
namespace Windows.Networking.BackgroundTransfer {
  public sealed class DownloadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
    void RemoveRequestHeader(string headerName);
    void SetRequestHeader(string headerName, string headerValue);
  }
  public sealed class UploadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
    void RemoveRequestHeader(string headerName);
    void SetRequestHeader(string headerName, string headerValue);
  }
}
namespace Windows.Networking.NetworkOperators {
  public interface INetworkOperatorTetheringAccessPointConfiguration2
  public interface INetworkOperatorTetheringManagerStatics4
  public sealed class NetworkOperatorTetheringAccessPointConfiguration : INetworkOperatorTetheringAccessPointConfiguration2 {
    TetheringWiFiBand Band { get; set; }
    bool IsBandSupported(TetheringWiFiBand band);
    IAsyncOperation<bool> IsBandSupportedAsync(TetheringWiFiBand band);
  }
  public sealed class NetworkOperatorTetheringManager {
    public static void DisableTimeout(TetheringTimeoutKind timeoutKind);
    public static IAsyncAction DisableTimeoutAsync(TetheringTimeoutKind timeoutKind);
    public static void EnableTimeout(TetheringTimeoutKind timeoutKind);
    public static IAsyncAction EnableTimeoutAsync(TetheringTimeoutKind timeoutKind);
    public static bool IsTimeoutEnabled(TetheringTimeoutKind timeoutKind);
    public static IAsyncOperation<bool> IsTimeoutEnabledAsync(TetheringTimeoutKind timeoutKind);
  }
  public enum TetheringTimeoutKind
  public enum TetheringWiFiBand
}
namespace Windows.Security.Authentication.Web.Core {
  public sealed class WebAccountMonitor {
    event TypedEventHandler<WebAccountMonitor, WebAccountEventArgs> AccountPictureUpdated;
  }
}
namespace Windows.Storage {
  public sealed class StorageFile : IInputStreamReference, IRandomAccessStreamReference, IStorageFile, IStorageFile2, IStorageFilePropertiesWithAvailability, IStorageItem, IStorageItem2, IStorageItemProperties, IStorageItemProperties2, IStorageItemPropertiesWithProvider {
    public static IAsyncOperation<StorageFile> GetFileFromPathForUserAsync(User user, string path);
  }
  public sealed class StorageFolder : IStorageFolder, IStorageFolder2, IStorageFolderQueryOperations, IStorageItem, IStorageItem2, IStorageItemProperties, IStorageItemProperties2, IStorageItemPropertiesWithProvider {
    public static IAsyncOperation<StorageFolder> GetFolderFromPathForUserAsync(User user, string path);
  }
}
namespace Windows.Storage.Provider {
  public static class StorageProviderSyncRootManager {
    public static bool IsSupported();
  }
}
namespace Windows.System {
  public sealed class UserChangedEventArgs {
    IVectorView<UserWatcherUpdateKind> ChangedPropertyKinds { get; }
  }
  public enum UserWatcherUpdateKind
}
namespace Windows.UI.Composition.Interactions {
  public sealed class InteractionTracker : CompositionObject {
    int TryUpdatePosition(Vector3 value, InteractionTrackerClampingOption option, InteractionTrackerPositionUpdateOption posUpdateOption);
  }
  public enum InteractionTrackerPositionUpdateOption
}
namespace Windows.UI.Composition.Particles {
  public sealed class ParticleAttractor : CompositionObject
  public sealed class ParticleAttractorCollection : CompositionObject, IIterable<ParticleAttractor>, IVector<ParticleAttractor>
  public class ParticleBaseBehavior : CompositionObject
  public sealed class ParticleBehaviors : CompositionObject
  public sealed class ParticleColorBehavior : ParticleBaseBehavior
  public struct ParticleColorBinding
  public sealed class ParticleColorBindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleColorBinding>>, IMap<float, ParticleColorBinding>
  public enum ParticleEmitFrom
  public sealed class ParticleEmitterVisual : ContainerVisual
  public sealed class ParticleGenerator : CompositionObject
  public enum ParticleInputSource
  public enum ParticleReferenceFrame
  public sealed class ParticleScalarBehavior : ParticleBaseBehavior
  public struct ParticleScalarBinding
  public sealed class ParticleScalarBindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleScalarBinding>>, IMap<float, ParticleScalarBinding>
  public enum ParticleSortMode
  public sealed class ParticleVector2Behavior : ParticleBaseBehavior
  public struct ParticleVector2Binding
  public sealed class ParticleVector2BindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleVector2Binding>>, IMap<float, ParticleVector2Binding>
  public sealed class ParticleVector3Behavior : ParticleBaseBehavior
  public struct ParticleVector3Binding
  public sealed class ParticleVector3BindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleVector3Binding>>, IMap<float, ParticleVector3Binding>
  public sealed class ParticleVector4Behavior : ParticleBaseBehavior
  public struct ParticleVector4Binding
  public sealed class ParticleVector4BindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleVector4Binding>>, IMap<float, ParticleVector4Binding>
}
namespace Windows.UI.Input {
  public sealed class CrossSlidingEventArgs {
    uint ContactCount { get; }
  }
  public sealed class DraggingEventArgs {
    uint ContactCount { get; }
  }
  public sealed class GestureRecognizer {
    uint HoldMaxContactCount { get; set; }
    uint HoldMinContactCount { get; set; }
    float HoldRadius { get; set; }
    TimeSpan HoldStartDelay { get; set; }
    uint TapMaxContactCount { get; set; }
    uint TapMinContactCount { get; set; }
    uint TranslationMaxContactCount { get; set; }
    uint TranslationMinContactCount { get; set; }
  }
  public sealed class HoldingEventArgs {
    uint ContactCount { get; }
    uint CurrentContactCount { get; }
  }
  public sealed class ManipulationCompletedEventArgs {
    uint ContactCount { get; }
    uint CurrentContactCount { get; }
  }
  public sealed class ManipulationInertiaStartingEventArgs {
    uint ContactCount { get; }
  }
  public sealed class ManipulationStartedEventArgs {
    uint ContactCount { get; }
  }
  public sealed class ManipulationUpdatedEventArgs {
    uint ContactCount { get; }
    uint CurrentContactCount { get; }
  }
  public sealed class RightTappedEventArgs {
    uint ContactCount { get; }
  }
  public sealed class SystemButtonEventController : AttachableInputObject
  public sealed class SystemFunctionButtonEventArgs
  public sealed class SystemFunctionLockChangedEventArgs
  public sealed class SystemFunctionLockIndicatorChangedEventArgs
  public sealed class TappedEventArgs {
    uint ContactCount { get; }
  }
}
namespace Windows.UI.Input.Inking {
  public sealed class InkModelerAttributes {
    bool UseVelocityBasedPressure { get; set; }
  }
}
namespace Windows.UI.Text.Core {
  public sealed class CoreTextServicesManager {
    public static TextCompositionKind TextCompositionKind { get; }
  }
  public enum TextCompositionKind
}
namespace Windows.UI.ViewManagement {
  public sealed class ApplicationView {
    bool CriticalInputMismatch { get; set; }
    ScreenCaptureDisabledBehavior ScreenCaptureDisabledBehavior { get; set; }
    bool TemporaryInputMismatch { get; set; }
    void ApplyApplicationUserModelID(string value);
  }
  public enum ApplicationViewMode {
    Spanning = 2,
  }
  public enum ScreenCaptureDisabledBehavior
  public sealed class UISettings {
    event TypedEventHandler<UISettings, UISettingsAnimationsEnabledChangedEventArgs> AnimationsEnabledChanged;
    event TypedEventHandler<UISettings, UISettingsMessageDurationChangedEventArgs> MessageDurationChanged;
  }
  public sealed class UISettingsAnimationsEnabledChangedEventArgs
  public sealed class UISettingsMessageDurationChangedEventArgs
}
namespace Windows.UI.ViewManagement.Core {
  public sealed class CoreInputView {
    event TypedEventHandler<CoreInputView, CoreInputViewHidingEventArgs> PrimaryViewHiding;
    event TypedEventHandler<CoreInputView, CoreInputViewShowingEventArgs> PrimaryViewShowing;
  }
  public sealed class CoreInputViewHidingEventArgs
  public enum CoreInputViewKind {
    Symbols = 4,
  }
  public sealed class CoreInputViewShowingEventArgs
  public sealed class UISettingsController
}
namespace Windows.UI.WindowManagement {
  public sealed class AppWindow {
    void SetPreferredTopMost();
    void SetRelativeZOrderBeneath(AppWindow appWindow);
  }
  public sealed class AppWindowChangedEventArgs {
    bool DidOffsetChange { get; }
  }
  public enum AppWindowPresentationKind {
    Snapped = 5,
    Spanning = 4,
  }
  public sealed class SnappedPresentationConfiguration : AppWindowPresentationConfiguration
  public sealed class SpanningPresentationConfiguration : AppWindowPresentationConfiguration
}
namespace Windows.UI.Xaml.Controls {
  public class HandwritingView : Control {
    UIElement HostUIElement { get; set; }
    public static DependencyProperty HostUIElementProperty { get; }
    CoreInputDeviceTypes InputDeviceTypes { get; set; }
    bool IsSwitchToKeyboardButtonVisible { get; set; }
    public static DependencyProperty IsSwitchToKeyboardButtonVisibleProperty { get; }
    double MinimumColorDifference { get; set; }
    public static DependencyProperty MinimumColorDifferenceProperty { get; }
    bool PreventAutomaticDismissal { get; set; }
    public static DependencyProperty PreventAutomaticDismissalProperty { get; }
    bool ShouldInjectEnterKey { get; set; }
    public static DependencyProperty ShouldInjectEnterKeyProperty { get; }
    event TypedEventHandler<HandwritingView, HandwritingViewCandidatesChangedEventArgs> CandidatesChanged;
    event TypedEventHandler<HandwritingView, HandwritingViewContentSizeChangingEventArgs> ContentSizeChanging;
    void SelectCandidate(uint index);
    void SetTrayDisplayMode(HandwritingViewTrayDisplayMode displayMode);
  }
  public sealed class HandwritingViewCandidatesChangedEventArgs
  public sealed class HandwritingViewContentSizeChangingEventArgs
  public enum HandwritingViewTrayDisplayMode
}
namespace Windows.UI.Xaml.Core.Direct {
  public enum XamlEventIndex {
    HandwritingView_ContentSizeChanging = 321,
  }
  public enum XamlPropertyIndex {
    HandwritingView_HostUIElement = 2395,
    HandwritingView_IsSwitchToKeyboardButtonVisible = 2393,
    HandwritingView_MinimumColorDifference = 2396,
    HandwritingView_PreventAutomaticDismissal = 2397,
    HandwritingView_ShouldInjectEnterKey = 2398,
  }
}

The post Windows 10 SDK Preview Build 18956 available now! appeared first on Windows Developer Blog.

ASP.NET Core and Blazor updates in .NET Core 3.0 Preview 8


.NET Core 3.0 Preview 8 is now available and it includes a bunch of new updates to ASP.NET Core and Blazor.

Here’s the list of what’s new in this preview:

  • Project template updates
    • Cleaned up top-level templates in Visual Studio
    • Angular template updated to Angular 8
    • Blazor templates renamed and simplified
    • Razor Class Library template replaces the Blazor Class Library template
  • Case-sensitive component binding
  • Improved reconnection logic for Blazor Server apps
  • NavLink component updated to handle additional attributes
  • Culture aware data binding
  • Automatic generation of backing fields for @ref
  • Razor Pages support for @attribute
  • New networking primitives for non-HTTP Servers
  • Unix domain socket support for the Kestrel Sockets transport
  • gRPC support for CallCredentials
  • ServiceReference tooling in Visual Studio
  • Diagnostics improvements for gRPC

Please see the release notes for additional details and known issues.

Get started

To get started with ASP.NET Core in .NET Core 3.0 Preview 8, install the .NET Core 3.0 Preview 8 SDK.

If you’re on Windows using Visual Studio, install the latest preview of Visual Studio 2019.

.NET Core 3.0 Preview 8 requires Visual Studio 2019 16.3 Preview 2 or later.

To install the latest Blazor WebAssembly template also run the following command:

dotnet new -i Microsoft.AspNetCore.Blazor.Templates::3.0.0-preview8.19405.7

Upgrade an existing project

To upgrade an existing ASP.NET Core app to .NET Core 3.0 Preview 8, follow the migration steps in the ASP.NET Core docs.

Please also see the full list of breaking changes in ASP.NET Core 3.0.

To upgrade an existing ASP.NET Core 3.0 Preview 7 project to Preview 8:

  • Update Microsoft.AspNetCore.* package references to 3.0.0-preview8.19405.7.
  • In Razor components rename OnInit to OnInitialized and OnInitAsync to OnInitializedAsync.
  • In Blazor apps, update Razor component parameters to be public, as non-public component parameters now result in an error.
  • In Blazor WebAssembly apps that make use of the HttpClient JSON helpers, add a package reference to Microsoft.AspNetCore.Blazor.HttpClient.
  • On Blazor form components remove use of Id and Class parameters and instead use the HTML id and class attributes.
  • Rename ElementRef to ElementReference.
  • Remove backing field declarations when using @ref or specify the @ref:suppressField parameter to suppress automatic backing field generation.
  • Update calls to ComponentBase.Invoke to call ComponentBase.InvokeAsync.
  • Update uses of ParameterCollection to use ParameterView.
  • Update uses of IComponent.Configure to use IComponent.Attach.
  • Remove use of namespace Microsoft.AspNetCore.Components.Layouts.
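As a sketch of the lifecycle rename and public-parameter changes from the steps above (the component name and members here are illustrative, not from the release notes):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Components;

public class ExampleComponent : ComponentBase
{
    // Component parameters must now be public; non-public ones are an error.
    [Parameter]
    public int IncrementAmount { get; set; }

    // Preview 7: protected override Task OnInitAsync()
    // Preview 8: renamed to OnInitializedAsync
    protected override Task OnInitializedAsync()
    {
        return Task.CompletedTask;
    }
}
```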

You should hopefully now be all set to use .NET Core 3.0 Preview 8.

Project template updates

Cleaned up top-level templates in Visual Studio

Top level ASP.NET Core project templates in the “Create a new project” dialog in Visual Studio no longer appear duplicated in the “Create a new ASP.NET Core web application” dialog. The following ASP.NET Core templates now only appear in the “Create a new project” dialog:

  • Razor Class Library
  • Blazor App
  • Worker Service
  • gRPC Service

Angular template updated to Angular 8

The Angular template for ASP.NET Core 3.0 has now been updated to use Angular 8.

Blazor templates renamed and simplified

We’ve updated the Blazor templates to use a consistent naming style and to simplify the number of templates:

  • The “Blazor (server-side)” template is now called “Blazor Server App”. Use blazorserver to create a Blazor Server app from the command-line.
  • The “Blazor” template is now called “Blazor WebAssembly App”. Use blazorwasm to create a Blazor WebAssembly app from the command-line.
  • To create an ASP.NET Core hosted Blazor WebAssembly app, select the “ASP.NET Core hosted” option in Visual Studio, or pass the --hosted option on the command-line.

Create a new Blazor app

dotnet new blazorwasm --hosted

Razor Class Library template replaces the Blazor Class Library template

The Razor Class Library template is now set up for Razor component development by default, and the Blazor Class Library template has been removed. New Razor Class Library projects target .NET Standard so they can be used from both Blazor Server and Blazor WebAssembly apps. To create a new Razor Class Library that targets .NET Core and supports Pages and Views instead, select the “Support pages and views” option in Visual Studio, or pass the --support-pages-and-views option on the command-line.

Razor Class Library for Pages and Views

dotnet new razorclasslib --support-pages-and-views

Case-sensitive component binding

Components in .razor files are now case-sensitive. This enables some useful new scenarios and improves diagnostics from the Razor compiler.

For example, the Counter has a button for incrementing the count that is styled as a primary button. What if we wanted a Button component that is styled as a primary button by default? Creating a component named Button in previous Blazor releases was problematic because it clashed with the button HTML element, but now that component matching is case-sensitive we can create our Button component and use it in Counter without issue.

Button.razor

<button class="btn btn-primary" @attributes="AdditionalAttributes" @onclick="OnClick">@ChildContent</button>

@code {
    [Parameter]
    public EventCallback<UIMouseEventArgs> OnClick { get; set; }

    [Parameter]
    public RenderFragment ChildContent { get; set; }

    [Parameter(CaptureUnmatchedValues = true)]
    public IDictionary<string, object> AdditionalAttributes { get; set; }
}

Counter.razor

@page "/counter"

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<Button OnClick="IncrementCount">Click me</Button>

@code {
    int currentCount = 0;

    void IncrementCount()
    {
        currentCount++;
    }
}

Notice that the Button component is Pascal-cased, which is the typical style for .NET types. If we instead try to name our component button, we get a warning that components cannot start with a lowercase letter, due to potential conflicts with HTML elements.

Lowercase component warning

We can move the Button component into a Razor Class Library so that it can be reused in other projects. We can then reference the Razor Class Library from our web app. The Button component will now have the default namespace of the Razor Class Library. The Razor compiler resolves components based on the in-scope namespaces. If we try to use our Button component without adding a using statement for the requisite namespace, we now get a useful error message at build time.

Unrecognized component error

NavLink component updated to handle additional attributes

The built-in NavLink component now supports passing through additional attributes to the rendered anchor tag. Previously NavLink had specific support for the href and class attributes, but now you can specify any additional attribute you’d like. For example, you can specify the anchor target like this:

<NavLink href="my-page" target="_blank">My page</NavLink>

which would render:

<a href="my-page" target="_blank" rel="noopener noreferrer">My page</a>

Improved reconnection logic for Blazor Server apps

Blazor Server apps require a live connection to the server in order to function. If the connection or the server-side state associated with it is lost, then the client will be unable to function. Blazor Server apps will attempt to reconnect to the server in the event of an intermittent connection loss, and this logic has been made more robust in this release. If the reconnection attempts fail before the network connection can be reestablished, the user can still attempt to retry the connection manually by clicking the provided “Retry” button.

However, if the server-side state associated with the connection was also lost (e.g. the server was restarted), then clients will still be unable to connect. A common situation where this occurs is during development in Visual Studio. Visual Studio watches the project for file changes and then rebuilds and restarts the app as changes occur. When this happens, the server-side state associated with any connected clients is lost, so any attempt to reconnect with that state will fail. The only option is to reload the app and establish a new connection.

New in this release, the app will now also suggest that the user reload the browser when the connection is lost and reconnection fails.

Reload prompt

Culture aware data binding

Data-binding support (@bind) for <input> elements is now culture-aware. Data bound values will be formatted for display and parsed using the current culture as specified by the System.Globalization.CultureInfo.CurrentCulture property. This means that @bind will work correctly when the user’s desired culture has been set as the current culture, which is typically done using the ASP.NET Core localization middleware (see Localization).

You can also manually specify the culture to use for data binding using the new @bind:culture parameter, where the value of the parameter is a CultureInfo instance. For example, to bind using the invariant culture:

<input @bind="amount" @bind:culture="CultureInfo.InvariantCulture" />

The <input type="number" /> and <input type="date" /> field types will by default use CultureInfo.InvariantCulture and the formatting rules appropriate for these field types in the browser. These field types cannot contain free-form text and have a look and feel that is controlled by the browser.

Other field types with specific formatting requirements include datetime-local, month, and week. These field types are not supported by Blazor at the time of writing because they are not supported by all major browsers.

Data binding now also includes support for binding to DateTime?, DateTimeOffset, and DateTimeOffset?.

Automatic generation of backing fields for @ref

The Razor compiler will now automatically generate a backing field for both element and component references when using @ref. You no longer need to define these fields manually:

<button @ref="myButton" @onclick="OnClicked">Click me</button>

<Counter @ref="myCounter" IncrementAmount="10" />

@code {
    void OnClicked() => Console.WriteLine($"I have a {myButton} and myCounter.IncrementAmount={myCounter.IncrementAmount}");
}

In some cases you may still want to manually create the backing field. For example, declaring the backing field manually is required when referencing generic components. To suppress backing field generation specify the @ref:suppressField parameter.

Razor Pages support for @attribute

Razor Pages now support the new @attribute directive for adding attributes to the generated page class.

For example, you can now specify that a page requires authorization like this:

@page
@attribute [Microsoft.AspNetCore.Authorization.Authorize]

<h1>Authorized users only!</h1>

<p>Hello @User.Identity.Name. You are authorized!</p>

New networking primitives for non-HTTP Servers

As part of the effort to decouple the components of Kestrel, we are introducing new networking primitives allowing you to add support for non-HTTP protocols.

You can bind to an endpoint (System.Net.EndPoint) by calling Bind on an IConnectionListenerFactory. This returns an IConnectionListener which can be used to accept new connections. Calling AcceptAsync returns a ConnectionContext with details on the connection. A ConnectionContext is similar to HttpContext, except it represents a connection instead of an HTTP request and response.

The example below shows a simple TCP echo server hosted in a BackgroundService, built using these new primitives.

public class TcpEchoServer : BackgroundService
{
    private readonly ILogger<TcpEchoServer> _logger;
    private readonly IConnectionListenerFactory _factory;
    private IConnectionListener _listener;

    public TcpEchoServer(ILogger<TcpEchoServer> logger, IConnectionListenerFactory factory)
    {
        _logger = logger;
        _factory = factory;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        _listener = await _factory.BindAsync(new IPEndPoint(IPAddress.Loopback, 6000), stoppingToken);

        while (true)
        {
            var connection = await _listener.AcceptAsync(stoppingToken);
            // AcceptAsync will return null upon disposing the listener
            if (connection == null)
            {
                break;
            }
            // In an actual server, ensure all accepted connections are disposed prior to completing
            _ = Echo(connection, stoppingToken);
        }

    }

    public override async Task StopAsync(CancellationToken cancellationToken)
    {
        await _listener.DisposeAsync();
    }

    private async Task Echo(ConnectionContext connection, CancellationToken stoppingToken)
    {
        try
        {
            var input = connection.Transport.Input;
            var output = connection.Transport.Output;

            await input.CopyToAsync(output, stoppingToken);
        }
        catch (OperationCanceledException)
        {
            _logger.LogInformation("Connection {ConnectionId} cancelled due to server shutdown", connection.ConnectionId);
        }
        catch (Exception e)
        {
            _logger.LogError(e, "Connection {ConnectionId} threw an exception", connection.ConnectionId);
        }
        finally
        {
            await connection.DisposeAsync();
            _logger.LogInformation("Connection {ConnectionId} disconnected", connection.ConnectionId);
        }
    }
}

Unix domain socket support for the Kestrel Sockets transport

We’ve updated the default sockets transport in Kestrel to add support for Unix domain sockets (on Linux, macOS, and Windows 10, version 1803 and newer). To bind to a Unix socket, you can call the ListenUnixSocket() method on KestrelServerOptions.

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder
                .ConfigureKestrel(o =>
                {
                    o.ListenUnixSocket("/var/listen.sock");
                })
                .UseStartup<Startup>();
        });

gRPC support for CallCredentials

In preview8, we’ve added support for CallCredentials allowing for interoperability with existing libraries like Grpc.Auth that rely on CallCredentials.
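As a sketch of how CallCredentials composes with the new managed gRPC client (the bearer token and address below are illustrative placeholders, not values from the release notes):

```csharp
using System.Threading.Tasks;
using Grpc.Core;
using Grpc.Net.Client;

// Attach per-call credentials (e.g. a bearer token) to every call on a channel.
// "my-token" and the address are placeholders for illustration.
var callCredentials = CallCredentials.FromInterceptor((context, metadata) =>
{
    metadata.Add("Authorization", "Bearer my-token");
    return Task.CompletedTask;
});

var channel = GrpcChannel.ForAddress("https://localhost:5001", new GrpcChannelOptions
{
    // Combine transport-level TLS with the per-call credentials.
    Credentials = ChannelCredentials.Create(ChannelCredentials.SecureSsl, callCredentials)
});
```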

Diagnostics improvements for gRPC

Support for Activity

The gRPC client and server use Activities to annotate inbound/outbound requests with baggage containing information about the current RPC operation. This information can be accessed by telemetry frameworks for distributed tracing and by logging frameworks.
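For instance, inside a service method the ambient activity can be inspected (a minimal sketch; in practice a telemetry framework consumes this information rather than writing it to the console):

```csharp
using System;
using System.Diagnostics;

// Activity.Current is set for the duration of an annotated gRPC call;
// it is null when no activity is in flight, so guard accordingly.
var activity = Activity.Current;
if (activity != null)
{
    Console.WriteLine($"Operation: {activity.OperationName}, Id: {activity.Id}");
}
```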

EventCounters

The newly introduced Grpc.AspNetCore.Server and Grpc.Net.Client providers now emit the following event counters:

  • total-calls
  • current-calls
  • calls-failed
  • calls-deadline-exceeded
  • messages-sent
  • messages-received
  • calls-unimplemented

You can use the dotnet counters global tool to view the metrics emitted.

dotnet counters monitor -p <PID> Grpc.AspNetCore.Server

ServiceReference tooling in Visual Studio

We’ve added support in Visual Studio that makes it easier to manage references to other Protocol Buffers documents and Open API documents.

ServiceReference

When pointed at OpenAPI documents, the ServiceReference experience in Visual Studio can generate typed C#/TypeScript clients using NSwag.

When pointed at Protocol Buffer (.proto) files, the ServiceReference experience in Visual Studio can generate gRPC service stubs, gRPC clients, or message types using the Grpc.Tools package.

SignalR User Survey

We’re interested in how you use SignalR and the Azure SignalR Service, and your opinions on SignalR features. To that end, we’ve created a survey we’d like to invite any SignalR customer to complete. If you’re interested in talking to one of the engineers from the SignalR team about your ideas or feedback, we’ve provided an opportunity to enter your contact information in the survey, but that information is not required. Help us plan the next wave of SignalR features by providing your feedback in the survey.

Give feedback

We hope you enjoy the new features in this preview release of ASP.NET Core and Blazor! Please let us know what you think by filing issues on GitHub.

Thanks for trying out ASP.NET Core and Blazor!

The post ASP.NET Core and Blazor updates in .NET Core 3.0 Preview 8 appeared first on ASP.NET Blog.


Announcing Entity Framework Core 3.0 Preview 8 and Entity Framework 6.3 Preview 8


The Preview 8 versions of the EF Core 3.0 package and the EF 6.3 package are now available for download from nuget.org.

New previews of .NET Core 3.0 and ASP.NET Core 3.0 are also available today.

Please install these previews to validate that all the functionality required by your applications is available and works correctly. 

Please report any issues you find as soon as possible to either the EF Core issue tracker or the EF 6 issue tracker on GitHub.

At this point, we are especially interested in hearing about any unexpected issues blocking you from upgrading to the new releases, but you are also welcome to try new functionality and provide general feedback.

What’s new in EF Core 3.0

Preview 8 includes fixes for 68 issues that we resolved since Preview 7. Here are a few highlights:

  • Some functionality that we had temporarily disabled while working on our new LINQ implementation is now restored. For example, compiling queries explicitly is now possible, and owned entity types are once again automatically included in query results.
  • Several existing and new APIs across EF Core have been tweaked as a result of API reviews.
  • Numerous APIs required by providers have been made public.
  • The new interception feature has been enhanced to allow intercepting DbCommand creation.
  • KeyAttribute is now scaffolded even if EF Core won’t use it, for example for composite keys (contributed by Erik Ejlskov Jensen).
  • Various improvements were made to the scaffolding of comments from SQL Server database objects (also contributed by Erik Ejlskov Jensen).
  • Support for scaffolding of views was extended to SQLite (also contributed by Erik Ejlskov Jensen).
  • EF.Functions.IsDate() was added for SQL Server (contributed by Rafael Almeida).
  • The Cosmos DB provider was upgraded to use the 3.0 RTM version of the Azure Cosmos SDK.
  • The SQL Server provider was upgraded to a newer preview version of Microsoft.Data.SqlClient.

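As an example of one of the highlights above, the new IsDate translation can be used directly in a query (the Order entity and its ImportedDate string property are illustrative; a configured DbContext is assumed):

```csharp
using System.Linq;
using Microsoft.EntityFrameworkCore;

// Translates to SQL Server's ISDATE() function, filtering rows whose
// string column holds a valid date.
var ordersWithValidDates = context.Orders
    .Where(o => EF.Functions.IsDate(o.ImportedDate))
    .ToList();
```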
Most of the major pieces of the new EF Core LINQ implementation are now in place. However, in Preview 8 the functionality of the in-memory provider is still very limited, and global query filters are still not correctly applied to navigation properties. The fix for the latter is available in daily builds, as well as fixes for many other issues.

Consider installing daily builds

In fact, we have already fixed more than 50 issues that aren’t included in Preview 8, and we are tracking several more to be fixed before RTM.

Given that we are fixing issues very rapidly, we recommend switching to a daily build to verify the latest fixes.

Detailed instructions to install daily builds, including the necessary NuGet feeds, can be found in the How to get daily builds of ASP.NET Core article.

Common workarounds for LINQ queries

The LINQ implementation in EF Core 3.0 is designed to work very differently from the one used in previous versions of EF Core, and in some areas, it’s still a work in progress. For these reasons, you are likely to run into issues with LINQ queries, especially when upgrading existing applications. Here are some workarounds that might help you get things working:

  • Try a daily build (as previously mentioned) to confirm that you aren’t hitting an issue that has already been fixed.
  • Switch to client evaluation explicitly: If your query filters data based on an expression that cannot be translated to SQL, you may need to switch to client evaluation explicitly by inserting a call to AsEnumerable(), AsAsyncEnumerable(), ToList(), or ToListAsync() in the middle of the query. For example, the following query will no longer work in EF Core 3.0 because one of the predicates in the where clause requires client evaluation:
    var specialCustomers = context.Customers
      .Where(c => c.Name.StartsWith(n) && IsSpecialCustomer(c));

    But if you know it is reasonable to process part of the filter on the client, you can rewrite the query as:

    var specialCustomers = context.Customers
         .Where(c => c.Name.StartsWith(n))
         .AsEnumerable() // Start using LINQ to Objects (switch to client evaluation)
         .Where(c => IsSpecialCustomer(c));

    Remember that this is by-design: In EF Core 3.0, LINQ operations that cannot be translated to SQL are no longer automatically evaluated on the client.

  • Use raw SQL queries: If some expression in your LINQ query is not translated correctly (or at all) to SQL, but you know what translation you would want to have generated, you may be able to work around the issue by executing your own SQL statement using the FromSqlRaw() or FromSqlInterpolated() methods.

    Also make sure an issue exists in our issue tracker on GitHub to support the translation of the specific expression.
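
    As a sketch of this workaround, the client-evaluated filter from the earlier example could be replaced with a hand-written SQL statement. The Customers table and Name column below are assumptions for illustration; FromSqlInterpolated() turns each interpolated value into a parameter, so it is safe from SQL injection:

    ```csharp
    // Hypothetical example: assumes a Customers table with a Name column.
    // Each interpolated value becomes a DbParameter rather than inline SQL.
    var pattern = n + "%";
    var specialCustomers = context.Customers
        .FromSqlInterpolated($"SELECT * FROM Customers WHERE Name LIKE {pattern}")
        .ToList();
    ```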

Breaking changes

All breaking changes in this release are listed in the Breaking Changes in EF Core 3.0 article. We keep the list up to date on every preview, with the most impactful changes near the top of the list, to make it easier for you to react.

Obtaining the Preview 8 packages

EF Core 3.0 is distributed exclusively as NuGet packages. As usual, add or upgrade the runtime to preview 8 via the NuGet user interface, the Package Manager Console in Visual Studio, or via the dotnet add package command. In all cases, include the option to allow installing pre-release versions. For example, you can execute the following command to install the SQL Server provider:

dotnet add package Microsoft.EntityFrameworkCore.SqlServer --version 3.0.0-*

With .NET Core 3.0, the dotnet ef command-line tool is no longer included in the .NET Core SDK. Before you can execute EF Core migration or scaffolding commands, you’ll have to install it as either a global or local tool. Due to limitations in dotnet tool install, installing preview tools requires specifying at least part of the preview version on the installation command. For example, to install the 3.0 Preview 8 version of dotnet ef as a global tool, execute the following command:

dotnet tool install --global dotnet-ef --version 3.0.0-*

What’s new in EF 6.3 Preview 8

With the added ability to execute migration commands against .NET Core projects in Visual Studio, most of the work planned for the EF 6.3 package has been completed. We are now focused on fixing any bugs that arise.

As with EF Core, any bug fixes that happened after we branched for Preview 8 are available in our daily builds.

On the tooling side, we plan to release an updated EF6 designer in an upcoming update of Visual Studio 2019 which will work with projects that target .NET Core (tracked in issue #883).

Until this new version of the designer is available, we recommend that you work with your EDMX files inside projects that target .NET Framework and then copy the final version of your EDMX and related files to your .NET Core project. Note that due to a bug in current builds of Visual Studio, copying the files from within Solution Explorer may cause hangs. Copy the files using Windows Explorer or from the command line instead.

We are also seeking feedback and possible contributions to enable a cross-platform command line experience for migrations commands, similar to dotnet ef but for EF6 (tracked in issue #1053). If you would like to see this happen, or if you would like to contribute to it, please vote or comment on the issue.

Again, if you find any unexpected issues with EF 6.3 Preview 8, please report them to our GitHub issue tracker.

Weekly status updates

If you’d like to track our progress more closely, we now publish weekly status updates to GitHub. We also post these status updates to our Twitter account, @efmagicunicorns.

Thank you

As usual, thank you for trying our preview bits, and for all the bug reports and contributions that will help make EF Core 3.0 and EF 6.3 much better releases.

The post Announcing Entity Framework Core 3.0 Preview 8 and Entity Framework 6.3 Preview 8 appeared first on .NET Blog.

Announcing .NET Core 3.0 Preview 8


Today, we are announcing .NET Core 3.0 Preview 8. Just like with Preview 7, we’ve focused on polishing .NET Core 3.0 for a final release and are not adding new features. If these final previews seem anticlimactic, that’s by design.

Download .NET Core 3.0 Preview 8 right now on Windows, macOS and Linux.

ASP.NET Core and EF Core are also releasing updates today.

Details:

The Microsoft .NET Site has been updated to .NET Core 3.0 Preview 8 (see the .NET Core runtime version in the footer text). It’s been running successfully on Preview 8 for over two weeks, on Azure WebApps (as a self-contained app). We will move the site to Preview 9 builds shortly.

If you missed it, check out the improvements we released in .NET Core 3.0 Preview 7, from last month.

Go Live

.NET Core 3.0 Preview 8 is supported by Microsoft and can be used in production. We strongly recommend that you test your app running on Preview 8 before deploying it into production. If you find an issue with .NET Core 3.0, please file a GitHub issue and/or contact Microsoft support.

If you are using .NET Core 3.0 Preview 7, you need to move to Preview 8 for “Go Live” support.

Closing

The .NET Core 3.0 release is coming close to completion, and the team is solely focused on stability and reliability now that we’re no longer building new features. Please tell us about any issues you find, ideally as quickly as possible. We want to get as many fixes in as possible before we ship the final 3.0 release.

If you install daily builds, please read an important PSA on .NET Core master branches.

The post Announcing .NET Core 3.0 Preview 8 appeared first on .NET Blog.

.NET Framework 4.8 is available on Windows Update, WSUS and MU Catalog


We are happy to announce that Microsoft .NET Framework 4.8 is now available on Windows Update, Windows Server Update Services (WSUS) and Microsoft Update (MU) Catalog. This release includes quality and reliability fixes based on feedback since the .NET Framework 4.8 initial release.

.NET Framework 4.8 is available for the following client and server platforms:

  • Windows Client versions: Windows 10 version 1903, Windows 10 version 1809, Windows 10 version 1803, Windows 10 version 1709, Windows 10 version 1703, Windows 10 version 1607, Windows 8.1, Windows 7 SP1
  • Windows Server versions: Windows Server 2019, Windows Server version 1809, Windows Server version 1803, Windows Server 2016, Windows Server 2012, Windows Server 2012 R2, Windows Server 2008 R2 SP1

Note: Windows 10 May 2019 Update ships with .NET Framework 4.8 already included.

The updated .NET Framework 4.8 installers (which include the additional quality and reliability fixes) are available for download.

The updated .NET Framework 4.8 can be installed on top of the initial release of .NET Framework 4.8 from April this year. All download links are provided at .NET Framework downloads.

Quality and Reliability Fixes

The following fixes are included in this update:

ASP.NET:

  • Fixed System.Web.Caching initialization bug when using ASP.NET cache on machines without IIS. [889110, System.Web.dll, Bug]

 Windows Forms:

  • Fixed the ability to select ComboBox edit field text using mouse down+move [853381, System.Windows.Forms.dll, Bug]
  • Fixed the issue with interaction between WPF user control and hosting WinForms app when processing keyboard input. [899206, WindowsFormsIntegration.dll, Bug]
  • Fixed the issue with Narrator/NVDA announcing of PropertyGrid’s ComboBox expanding and collapsing action. [792617, System.Windows.Forms.dll, Bug]
  • Fixed the issue with rendering “…” button of PropertyGrid control in HC mode to draw button background and dots contrasted. [792780, System.Windows.Forms.dll, Bug]

WPF:

  • Fixed a handle leak during creation of a Window in WPF applications that are manifested for Per Monitor DPI V2 Awareness.  This leak may lead to extraneous GC.Collect calls that can impact performance in Window creation scenarios.  [845699, PresentationFramework.dll, Bug]
  • Fixed a regression caused by the bug fix involving bindings with DataContext explicitly on the binding path. [850536, PresentationFramework.dll, Bug]
  • Fixed crash due to ArgumentNullException when loading a DataGrid containing a ComboBox while automation is active.  For example, when navigating Visual Studio to the Text Editor > C# > Code Style > Naming page in Tools > Options. [801039, PresentationFramework.dll, Bug]

You can see the complete list of improvements for .NET Framework 4.8 in the .NET Framework 4.8 release notes.

Knowledge Base Articles

You can reference the following Knowledge Base Articles for the WU/WSUS/Catalog release:

| OS Platform | .NET Framework 4.8 Redistributable | .NET Framework 4.8 Language Pack |
| --- | --- | --- |
| Windows 7 SP1 / Windows Server 2008 R2 | KB4503548 | KB4497410 |
| Windows Server 2012 | KB4486081 | KB4087513 |
| Windows 8.1 / Windows Server 2012 R2 | KB4486105 | KB4087514 |
| Windows 10 Version 1607 | KB4486129 (Catalog Only) | KB4087515 (Catalog Only) |
| Windows 10 Version 1703 | KB4486129 | KB4087515 |
| Windows Server 2016 | KB4486129 (Catalog Only) | KB4087515 (Catalog Only) |
| Windows 10 Version 1709 | KB4486153 | KB4087642 |
| Windows 10 Version 1803 | KB4486153 | KB4087642 |
| Windows Server, version 1803 | KB4486153 | KB4087642 |
| Windows 10 Version 1809 | KB4486153 | KB4087642 |
| Windows Server, version 1809 | KB4486153 (Catalog Only) | KB4087642 (Catalog Only) |
| Windows Server 2019 | KB4486153 (Catalog Only) | KB4087642 (Catalog Only) |

 

How is this release available?

Automatic Updates

.NET Framework 4.8 is being offered as a Recommended update. The reliability fixes for .NET Framework 4.8 will be co-installed with .NET Framework 4.8. At this time, we’re throttling the release as we have done with previous .NET Framework releases. Over the next few weeks we will be closely monitoring your feedback and will gradually open throttling.

While the release is throttled, you can use the Check for updates feature to get .NET Framework 4.8. Open your Windows Update settings (Settings > Update & Security > Windows Update) and select Check for updates.

Note: Throttled updates are offered at a lower priority than unthrottled updates, so if you have other Recommended or Important updates pending those will be offered before this update.

Once we open throttling, in most cases you will get the .NET Framework 4.8 with no further action necessary. If you have modified your AU settings to notify but not install, you should see a notification in the system tray about this update.

The deployment will be rolled out to various geographies globally over several weeks. So, if you do not get the update offered on the first day and do not want to wait until the update is offered, you can use the instructions above to check for updates or download from here.

Windows Server Update Services (WSUS) and Catalog

WSUS administrators will see this update in their WSUS admin console. The update is also available in the MU Catalog for download and deployment.

When you synchronize your WSUS server with Microsoft Update server (or use the Microsoft Update Catalog site for importing updates), you will see the updates for .NET Framework 4.8 published for each platform.

Dot.NET Site

.NET Framework 4.8 can be downloaded and installed manually on all supported platforms using the links from here.

More Information

Language Packs

In addition to the language neutral package, the .NET Framework 4.8 Language Packs are also available on Windows Update. These can be used if you have a previous .NET Framework language pack installed, or if you don’t but instead have a localized version of the base operating system or one or more Multilingual User Interface (MUI) packs installed.

Blocking the automatic deployment of .NET 4.8

Enterprises may have client machines that connect directly to the public Windows Update servers rather than to an internal WSUS server. In such cases, an administrator may have a need to prevent the .NET Framework 4.8 from being deployed to these client machines to allow testing of internal applications to be completed before deployment.

In such scenarios, administrators can deploy a registry key to machines and prevent the .NET Framework 4.8 from being offered to those machines. More information about how to use this blocker registry key can be found in the following Microsoft Knowledge Base article KB4516563: How to temporarily block the installation of the .NET Framework 4.8.
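
As a sketch, the blocker described in KB4516563 is a single registry value, and a .reg file along the following lines would deploy it. The key path and value name here follow the pattern used for earlier .NET Framework releases and are assumptions — verify them against the KB article before rolling this out:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\WU]
"BlockNetFramework48"=dword:00000001
```

Removing the value (or setting it to 0) makes the machines eligible for the update again once internal testing is complete.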

FAQ

What do I need to do if I already have .NET Framework 4.8 product installed and want the reliability fixes?

If you installed .NET Framework 4.8 from the download site earlier, then you need to reinstall the product using the links at the top of this blog post.

Do I still need to install updated .NET Framework 4.8 if I am getting .NET 4.8 from Windows Update/WSUS?

No. Installing .NET Framework 4.8 via Windows Update or WSUS installs the product together with the included reliability fixes.

Will Windows Update offer the updated .NET Framework 4.8 if I already have the RTM version (4.7.3761) of .NET 4.8 installed?

Yes, Windows Update will offer the .NET 4.8 product update to machines that have the RTM version (4.7.3761) of the product already installed. After the update you will see the new version (4.7.3928) of files that were included for the reliability fixes.

Will the RTM version (4.7.3761) of the .NET Framework 4.8 installers still work if I had downloaded them earlier?

Yes, the installers will still work, but we recommend that you download the latest versions of the installers as per the links above.

Will the detection key (Release Key) for the product change after I install the updated .NET Framework 4.8?

No, the Release key value in the registry will remain the same. See here for guidance on product detection and release key value for .NET 4.8.

How can I get the reliability fixes for Windows 10 May 2019 Update (Version 1903)?

These reliability fixes will be available via the next .NET Framework Cumulative update for Windows 10 May Update (Version 1903).

 

 

The post .NET Framework 4.8 is available on Windows Update, WSUS and MU Catalog appeared first on .NET Blog.

Visual Studio 2019 version 16.3 Preview 2 and Visual Studio for Mac version 8.3 Preview 2 Released!


Visual Studio 2019 version 16.3 Preview 2 and Visual Studio for Mac version 8.3 Preview 2 are available. Because many of the features in this release are a response to Developer Community feedback, we are excited to share our changes. First of all, the latest Preview versions for both PC and Mac are available to download from VisualStudio.com. You can also upgrade from within Visual Studio 2019 by clicking the notification bell. Likewise, within the Visual Studio 2019 for Mac IDE, click the Visual Studio > Check for Updates menu item. Our release notes contain many more details, so make sure to take a look.

Be More Productive with Visual Studio Container Tools

Developers creating serverless solutions using Azure Functions (v2) can now add Docker container support to their C# projects. Among other benefits, this makes Azure Functions much more portable. Furthermore, the Container Tools enhance productivity by easily containerizing Azure Functions in a Linux container. To try it out, right-click the project name in Solution Explorer and select Add > Docker Support.

We didn’t stop there! First, the tools add a Dockerfile to the project and automatically build the Docker image. Next, you can ensure your code works as expected with the added ability to debug Azure Functions running inside the container. Make sure the debug target is set to Docker to enable this feature. Moreover, you can set breakpoints, inspect variables, and step through your Azure Functions.
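
For reference, the generated Dockerfile follows the usual multi-stage pattern. The sketch below is illustrative only — the project name and exact base image tags are assumptions, and the tooling writes the real file for you:

```dockerfile
# Build stage: compile and publish the Functions project
# (MyFunctionApp.csproj is a hypothetical project name)
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build
WORKDIR /src
COPY . .
RUN dotnet publish MyFunctionApp.csproj -c Release -o /app/publish

# Final stage: the Azure Functions v2 Linux runtime image hosts the published output
FROM mcr.microsoft.com/azure-functions/dotnet:2.0
ENV AzureWebJobsScriptRoot=/home/site/wwwroot
COPY --from=build /app/publish /home/site/wwwroot
```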

Visual Studio 2019 Container Tools
Visual Studio 2019 Container Tools

 

Find What You Need Faster with Installer Search and IDE New Project Dialog Labels

A search box in the Visual Studio Installer’s Individual components tab allows you to quickly locate all available components for installation. In addition, updating Visual Studio will automatically install updates to the Visual Studio Installer without interrupting the update process for the IDE. 

Search box in Visual Studio 2019 Installer
Search Box in the Visual Studio 2019 Installer

 

A New label highlights recently installed project templates, making identification easier. Additionally, filters now display their selected values in the New Project Dialog. You can easily organize recently used templates by pinning, unpinning, and removing them from the Recent project templates list.

New Label and Pin Features in the New Project Dialog
New Label and Pin View of the New Project Dialog.

Productivity Improvements in C++

On the C++ side, we’ve made a variety of productivity improvements including semantic colorization and on-by-default IntelliCode. Next, we’ve added support for parallel builds in MSBuild-based Linux projects that leverage the native WSL experience.  Finally, there are new C++ Core Guideline rules. You can learn more about what’s new on the C++ Team Blog. We frequently release in-depth blog posts delving into these features throughout the week.

Rapidly Iterate on your UIs with the Public Preview of XAML Hot Reload for Xamarin.Forms

In July, we announced the private preview of XAML Hot Reload for Xamarin.Forms in Visual Studio 2019 and Visual Studio 2019 for Mac. XAML Hot Reload enables you to quickly make changes to your XAML UI and see them reflected without requiring another build and deploy. Notably, this feature requires no setup.  Start debugging, change your XAML, and hit Save to go live. Thanks to the amazing feedback from the private preview, we were able to iterate quickly on the tool.  Consequently, we are excited to launch our public preview as part of this release! Start using it today by enabling it in Tools > Options > Xamarin > Hot Reload. Learn more in our blog about XAML Hot Reload for Xamarin.Forms and how to move off the private preview.

XAML Hot Reload for Xamarin.Forms
XAML Hot Reload for Xamarin.Forms

Browse Selection for ASP.NET Core Projects in Visual Studio 2019 for Mac

Also available today is Visual Studio 2019 for Mac version 8.3 Preview 2. As part of our continual focus on making the .NET Core experience better on the Mac, we’ve made it easier to launch your ASP.NET Core project in browsers that aren’t your default macOS browser. For example, you can now select a particular browser to be used for run or debug via the configuration selector when working with ASP.NET Core web projects in Visual Studio 2019 for Mac.

We Can’t Wait for you to Give it a Try!

We encourage you to update to Visual Studio 2019 version 16.3 Preview 2 by downloading directly from VisualStudio.com or, likewise, through the notification bell inside of a previous Preview Channel release. Similarly, to get the Preview of Visual Studio 2019 for Mac version 8.3 Preview 2, use the updater via the IDE’s Visual Studio > Check for Updates menu item.

Finally, we love feedback. Visual Studio Developer Community is the most effective way to address issues and suggest improvements. Within our robust community, you can track your issues, suggest a feature, ask questions, and find answers from other developers. On behalf of our entire team, thank you for being so engaged in the life of our product.

The post Visual Studio 2019 version 16.3 Preview 2 and Visual Studio for Mac version 8.3 Preview 2 Released! appeared first on The Visual Studio Blog.

What’s New for Python in Visual Studio (16.3 Preview 2)


Today, we are releasing Visual Studio 2019 (16.3 Preview 2), which contains an updated testing experience for Python developers. We are happy to announce that the popular Python testing framework pytest is now supported. Additionally, we have re-worked the unittest experience for Python users in this release.

Continue reading to learn more about how you can enable and configure pytest and/or unittest for your development environment. What’s even better is that each testing framework is supported in both project mode and in Open Folder scenarios.   

 

Enabling and Configuring Testing for Projects

Configuring and working with Python tests in Visual Studio is easier than ever before.

 For users who are new to the testing experience within Visual Studio 2019 for Python projects, right-click on the project name and select the ‘Properties’ option. This option opens the project designer, which allows you to configure tests by going to the ‘Test’ tab.

From this tab, simply click the ’Test Framework’ dropdown box to select the testing framework you wish to use:

Walkthrough of New Testing Features in VS2019 16.3

  • For unittest, we use the project’s root directory for test discovery. This is a default setting that can be modified to include the path to the folder that contains your tests (if your tests are included in a sub-directory). We also use the unittest framework’s default pattern for test filenames (this also can be modified if you use a different file naming system for your test files). Prior to this release, unittest discovery was automatically initiated for the user. Now, the user is required to manually configure testing.
  • For pytest, you can specify a .ini configuration file which contains test filename patterns in addition to many other testing options.
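
    For example, a minimal pytest .ini file might restrict discovery to a tests folder and add a custom filename pattern. The folder name and patterns below are illustrative, not required:

    ```ini
    # Hypothetical pytest.ini: discover tests under ./tests,
    # in files named test_*.py or check_*.py, with verbose output
    [pytest]
    testpaths = tests
    python_files = test_*.py check_*.py
    addopts = -v
    ```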

Once you select and save your changes in the window, test discovery is initiated in the Test Explorer. If the Test Explorer is not open, navigate to the toolbar and select Test > Windows > Test Explorer. Test discovery can take up to 60 seconds to complete.
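
To see discovery in action, a minimal file whose name matches unittest’s default test*.py pattern is enough. The add() function here is just a placeholder for your own code under test:

```python
# test_sample.py -- the filename matches the default discovery pattern "test*.py"
import unittest


def add(a, b):
    """Placeholder for the code under test."""
    return a + b


class TestAdd(unittest.TestCase):
    def test_add_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_negative(self):
        self.assertEqual(add(-1, 1), 0)


if __name__ == "__main__":
    unittest.main()
```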

Once in the Test Explorer window, you have the ability to re-run your tests (by clicking the ‘Run All’ button or pressing CTRL + R,A) as well as view the status of your test runs.  Additionally, you can see the total number of tests your project contains and the duration of test runs:

to show the status of test runs in VS

If you wish to keep working while tests are running in the background but want to monitor the progress of your test run, you can go to the Output window and choose ‘Show output from: Tests’:

show the user how to select the tests output

We have also made it simple for users with pre-existing projects that contain test files to quickly continue working with their code in Visual Studio 2019. When you open a project that contains testing configuration files (e.g. a .ini file for pytest), but you have not installed or enabled pytest, you will be prompted to install the necessary packages and configure them for the Python environment in which you are working:

install and enable pytest infobar

For open folder scenarios (described below), these informational bars will also be triggered if you have not configured your workspace for pytest or unittest.

 

Configuring Tests for Open Folder Scenarios

In this release of Visual Studio 2019, users can configure tests to work in our popular open folder scenario.

To configure and enable tests, navigate to the Solution Explorer, click the “Show All Files” icon to show all files in the current folder, and select the PythonSettings.json file within the ‘Local Settings’ folder. (If this file doesn’t exist, create one in the ‘Local Settings’ folder.) Next, add the field TestFramework: “pytest” or TestFramework: “unittest” to your settings file, depending on the testing framework you wish to use.

json settings file for tests

 

For the unittest framework, if UnitTestRootDirectory and/or UnitTestPattern are not specified in PythonSettings.json, they are added and assigned default values of “.” and “test*.py”, respectively.
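
Putting those defaults together, a PythonSettings.json configured for unittest might look like the following (the values shown are simply the defaults just described, spelled out explicitly):

```json
{
  "TestFramework": "unittest",
  "UnitTestRootDirectory": ".",
  "UnitTestPattern": "test*.py"
}
```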

As in project mode, editing and saving any file triggers test discovery for the test framework that you specified. If you already have the Test Explorer window open, clicking CTRL + R,A also triggers discovery.

Note: If your folder contains a ‘src’ directory which is separate from the folder that contains your tests, you’ll need to specify the path to the src folder in your PythonSettings.json with the setting SearchPaths:

add searchpaths settings to json file
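
A sketch of that configuration, assuming SearchPaths accepts a list of folder paths relative to the workspace root:

```json
{
  "TestFramework": "pytest",
  "SearchPaths": ["src"]
}
```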

 

 

Debugging Tests

In this latest release, we’ve updated test debugging to use our new ptvsd 4 debugger, which is faster and more reliable than ptvsd 3. We’ve added an option so that you can use the legacy debugger if you run into issues. To enable it, go to Tools > Options > Python > Debugging > Use Legacy Debugger and check the box to enable it.

As in previous releases, if you wish to debug a test, set an initial breakpoint in your code, then right-click the test (or a selection) in Test Explorer and select Debug Selected Tests. Visual Studio starts the Python debugger as it would for application code.

debug python tests in VS2019 16.3 P2

Note: if you debug a test, you will find that the debugging session does not automatically end when the run completes. This is a known issue and the current workaround is to click ‘Stop Debugging’ (Shift + F5).

There’s also the ‘Test Detail Summary’ view that allows you to see the Stack Trace of a failed test which makes troubleshooting failed tests even easier. To access this view, simply click on the test within the Test Explorer that you wish to inspect and the ‘Test Detail Summary’ window will appear.

Test Detail Summary View for VS2019 Preview 2

 

Try it Out!

Be sure to download Visual Studio 2019 (16.3 Preview 2), install the Python Workload, and give feedback or view a list of existing issues on our GitHub repo.

The post What’s New for Python in Visual Studio (16.3 Preview 2) appeared first on Python.
