
Game performance and compilation time improvements in Visual Studio 2019


The C++ compiler in Visual Studio 2019 includes several new optimizations and improvements geared towards increasing the performance of games and making game developers more productive by reducing the compilation time of large projects. Although the focus of this blog post is on the game industry, these improvements apply to most C++ applications and C++ developers.

Compilation time improvements

One of the focus points of the C++ toolset team in the VS 2019 release is improving linking time, which in turn allows faster iteration builds and quicker debugging. Two significant changes to the linker help speed up the generation of debug information (PDB files):

  • Type pruning in the backend removes type information that is not referenced by any variable, reducing the amount of work the linker must do during type merging.
  • Type merging is sped up by using a fast hash function to identify identical types.

The table below shows the speedup measured in linking a large, popular AAA game:

Debug build configuration   Linking time (sec)   Linking time (sec)   Linking time
                            VS 2017 (15.9)       VS 2019 (16.0)       speedup
/DEBUG:full                 392.1                163.3                2.40x
/DEBUG:fastlink             72.3                 31.2                 2.32x

More details and additional benchmarks will be published soon in a blog post that will be linked here.

Vector (SIMD) expression optimizations

One of the most significant improvements in the code optimizer is handling of vector (SIMD) intrinsics, both from source code and as a result of automated vectorization. In VS 2017 and prior, most vector operations would go through the main optimizer without any special handling, similar to function calls, although they are represented as intrinsics – special functions known to the compiler. Starting with VS 2019, most expressions involving vector intrinsics are optimized just like regular integer/float code using the SSA optimizer.

Both float (e.g., _mm_add_ps) and integer (e.g., _mm_add_epi32) versions of the intrinsics are supported, targeting the SSE/SSE2 and AVX/AVX2 instruction sets. The optimizations performed include, among many others:

  • constant folding
  • arithmetic simplifications, including reassociation
  • handling of cmp, min/max, abs, extract operations
  • converting vector to scalar operations if profitable
  • patterns for shuffle and pack operations

Other optimizations, such as common sub-expression elimination, can now take advantage of a better understanding of load/store vector operations, which are handled like regular loads/stores. Several ways of initializing a vector register are recognized, and the values are used during expression simplification (e.g., _mm_set_ps, _mm_set_ps1, _mm_setr_ps, _mm_setzero_ps for float values).

Another important addition is the generation of fused multiply-add (FMA) for vector intrinsics when the /arch:AVX2 compiler flag is used – previously this was done only for scalar float code. FMA lets the CPU compute the expression a*b + c in fewer cycles, which can be a significant speedup in math-heavy code, as one of the examples below shows.

The following code exemplifies both the generation of FMA with /arch:AVX2 and the expression optimizations when /fp:fast is used:

__m128 test(float a, float b) {
    __m128 va = _mm_set1_ps(a);
    __m128 vb = _mm_set1_ps(b);
    __m128 vd = _mm_set1_ps(-b);

    // Computes (va * vb) + (va * -vb)
    return _mm_add_ps(_mm_mul_ps(va, vb),_mm_mul_ps(va, vd));
}

VS 2017 /arch:AVX2 /fp:fast – no simplifications are done and no FMA is generated:

vmovaps xmm3, xmm0
vbroadcastss xmm3, xmm0
vxorps xmm0, xmm1, DWORD PTR __xmm@80000000800000008000000080000000
vbroadcastss xmm0, xmm0
vmulps xmm2, xmm0, xmm3
vbroadcastss xmm1, xmm1
vmulps xmm0, xmm1, xmm3
vaddps xmm0, xmm2, xmm0
ret 0

VS 2019 /arch:AVX2 – no simplifications are done (not legal under /fp:precise), but FMA is generated:

vmovaps xmm2, xmm0
vbroadcastss xmm2, xmm0
vmovaps xmm0, xmm1
vbroadcastss xmm0, xmm1
vxorps xmm1, xmm1, DWORD PTR __xmm@80000000800000008000000080000000
vbroadcastss xmm1, xmm1
vmulps xmm0, xmm0, xmm2
vfmadd231ps xmm0, xmm1, xmm2
ret 0

VS 2019 /arch:AVX2 /fp:fast – the entire expression is simplified to “return 0”, since /fp:fast allows applying the usual arithmetic rules:

vxorps xmm0, xmm0, xmm0
ret 0

 

More examples can be found in this older blog post, which discusses the SIMD generation of several compilers – VS 2019 now handles all the cases as expected, and a lot more!

Benchmarking the vector optimizations

For measuring the benefit of the vector optimizations, Xbox ATG (Advanced Technology Group) provided a benchmark based on code from Unreal Engine 4 for commonly used mathematical operations, such as SIMD expressions, vector/matrix transformations, and sin/cos/sqrt functions. The tests are a combination of cases where the values are constants and cases where the values are unknown at compile time. This covers the common scenario where the values are not known at compile time, but also the situation that usually arises after inlining, when some values turn out to be constants.

The table below shows the speedup of the tests grouped into four categories, the execution time (milliseconds) being the sum of all tests in the category. The next table shows the improvements for a few individual tests when using unknown, random values – the versions that use constants are folded now as expected.

Category       VS 2017 (ms)   VS 2019 (ms)   Speedup
Math           482            366            27.36%
Vector         337            238            34.43%
Matrix         3168           3158           0.32%
Trigonometry   3268           1882           53.83%
 

Test               VS 2017 (ms)   VS 2019 (ms)   Speedup
VectorDot3         42             39             7.4%
MatrixMultiply     204            194            5%
VectorCRTSin       421            402            4.6%
NormalizeSqrt      82             77             7.4%
NormalizeInvSqrt   106            97             8.8%

Improvements in Unreal Engine 4 – Infiltrator Demo

To ensure that our efforts benefit actual games and not just micro-benchmarks, we used the Infiltrator Demo as a representative for an AAA game based on Unreal Engine 4.21. Because it is mostly a cinematic sequence rendered in real time, with complex graphics, animations, and physics, its execution profile is similar to an actual game's; at the same time, it is a great target for getting the stable, reproducible results needed to investigate performance and measure the impact of compiler improvements.

The main way of measuring a game’s performance is using the frame time. Frame times can be viewed as the inverse of FPS (frames per second), representing the time it takes to prepare one frame to be displayed, lower values being better. The two main threads in Unreal Engine are the gaming thread and rendering thread – this work focuses mostly on the gaming thread performance.

There are four builds being tested, all based on the default Unreal Engine settings, which use unity (jumbo) builds and have /fp:fast /favor:AMD64 enabled. Note that the AVX2 instruction set is being used, except for one build that keeps the default AVX:

  • VS 2017 (15.9) with /arch:AVX2
  • VS 2019 (16.0) with /arch:AVX2
  • VS 2019 (16.0) with /arch:AVX2 and /LTCG, to showcase the benefit of using link time code generation
  • VS 2019 (16.0) with /arch:AVX, to showcase the benefit of using AVX2 over AVX

Testing details:

  • To capture frame times, a custom ETW provider was integrated into the game to report the values to Xperf running in the background. Each build of the game has one warm-up run, then 10 runs of the entire game with ETW tracing enabled. The final frame time is computed, for each 0.5 second interval, as the average of these 10 runs. The process is automated by a script that starts the game once and after each iteration restarts the level from the beginning. Out of the 210 seconds (3:30m) long demo, the first 170 seconds are captured.
  • Test PC configuration:
    • AMD Ryzen 2700X CPU (8 cores/16 threads) fixed at 3.4 GHz to eliminate noise from dynamic frequency scaling
    • AMD Radeon RX 470 GPU
    • 32 GB DDR4-2400 RAM
    • Windows 10 1809
    • The game runs at a resolution of 640×480 to reduce the impact the GPU rendering may have

Results:

The chart below shows the measured frame times up to second 170 for the four tested builds of the game. Frame times range from 4 ms to 15 ms, peaking in the more graphics-intensive part around seconds 155-165. To make the difference between builds more obvious, the “fastest” and “slowest” sections are zoomed in. As mentioned before, a lower frame time value is better.

Graph showing the frame time over the duration of the game

The following table summarizes the results, both as an average over the entire game and by focusing on the “slow” section, where the largest improvement can be seen:

Improvement   VS 2019 AVX2         VS 2019 LTCG AVX2    VS 2019 AVX
              vs. VS 2017 AVX2     vs. VS 2019 AVX2     vs. VS 2019 AVX2
Average       0.7%                 0.9%                 -1.8%
Largest       2.8%                 3.2%                 -8.5%

  • VS 2019 improves frame time up to 2.8% over VS 2017
  • An LTCG build improves frame time up to 3.2% compared to the default unity build
  • Using AVX2 over AVX shows a significant frame time improvement, up to 8.5%, in large part a result of the compiler automatically generating FMA instructions for scalar operations and, new in 16.0, for vector operations.

The performance in different parts of the game can be seen more easily by computing the speedup of one build relative to another, as a percentage. The following charts show the results when comparing the frame times for the 16.0/15.9 and AVX/AVX2 builds – the X axis is the time in the game, the Y axis is the frame time improvement percentage:

Image showing the improvement between 16.0 and 15.9

Image showing the improvement between 16.0 AVX2 and 16.0 AVX

More optimizations

Besides the vector instruction optimizations, VS 2019 has several new optimizations that help both games and C++ programs in general:

  • Useless struct/class copies are being removed in several more cases, including copies to output parameters and functions returning an object. This optimization is especially effective in C++ programs that pass objects by value.
  • Added a more powerful analysis for extracting information about variables from control flow (if/else/switch statements), used to remove branches that can be proven to be always true or false and to improve the variable range estimation.
  • Unrolled, constant-length memsets now use 16-byte store instructions (or 32-byte stores with /arch:AVX).
  • Several new scalar FMA patterns are identified with /arch:AVX2. These include the following common expressions: (x + 1.0) * y; (x – 1.0) * y; (1.0 – x) * y; (-1.0 – x) * y.
  • A more comprehensive list of backend improvements can be found in this blog post.

We’d love for you to download Visual Studio 2019 and give it a try. As always, we welcome your feedback. We can be reached via the comments below or via email (visualcpp@microsoft.com). If you encounter problems with Visual Studio or MSVC, or have a suggestion for us, please let us know through Help > Send Feedback > Report A Problem / Provide a Suggestion in the product, or via Developer Community. You can also find us on Twitter (@VisualC) and Facebook (msftvisualcpp).

The post Game performance and compilation time improvements in Visual Studio 2019 appeared first on C++ Team Blog.


Reducing security alert fatigue using machine learning in Azure Sentinel


Last week we launched Azure Sentinel, a cloud-native SIEM tool. Machine learning (ML) in Azure Sentinel has been built in right from the beginning. We have thoughtfully designed the system with ML innovations aimed at making security analysts, security data scientists, and engineers productive. The focus is to reduce alert fatigue and offer ML toolkits tailored to the security community. The three ML pillars in Azure Sentinel are Fusion, built-in ML, and build-your-own ML.

Fusion

Alert fatigue is real. Security analysts face a huge burden of triage as they not only have to sift through a sea of alerts, but also correlate alerts from different products manually or using a traditional correlation engine.

Our Fusion technology, currently in public preview, uses state-of-the-art scalable learning algorithms to correlate millions of lower-fidelity anomalous activities into tens of high-fidelity cases. Azure Sentinel integrates with the Microsoft 365 solution and correlates millions of signals from products such as Azure Identity Protection, Microsoft Cloud App Security, and soon Azure Advanced Threat Protection, Windows Advanced Threat Protection, O365 Advanced Threat Protection, Intune, and Azure Information Protection. You can learn how to turn Fusion on by visiting our documentation, “Enable Fusion.”

Screenshot of fusion and two composite alerts

Fusion combines yellow alerts, which by themselves may not be actionable, into high-fidelity, security-interesting red cases. We look across disparate products to produce actionable incidents and so reduce the false-positive rate. From our measurements with external customers and internal evaluations, we see a median 90 percent reduction in alert fatigue. This is possible because Fusion can detect complex, multi-stage attacks, and it differs from traditional correlation engines in the following ways:

Traditional correlation engines: assume that the attacker takes only one path to attain their goal.
Fusion: iterative attack simulation – Fusion encodes uncertainty in paths/stages by simulating different attack paths using an iterative Markov chain Monte Carlo simulation.

Traditional correlation engines: assume the attacker follows a static kill chain as the attack path is executed.
Fusion: probabilistic cloud kill chain – Fusion constantly updates the probability of moving to the next step in the kill chain through a custom-defined prior probability function.

Traditional correlation engines: assume that all the information needed to catch the attacker is present in the logs.
Fusion: advances in graphical methods – Fusion encodes uncertainty in the completeness/connectivity of information in the kill chain, helping it detect novel attacks.
In the screenshot above, one can see the Fusion case and the two composite alerts that went into it.

Organizations are currently using Fusion for the following scenarios, which compound anomalies from the Identity Protection and Microsoft Cloud App Security products:

  • Anomalous login leading to O365 mailbox exfiltration
  • Anomalous login leading to suspicious cloud app administrative activity
  • Anomalous login leading to mass file deletion
  • Anomalous login leading to mass file download
  • Anomalous login leading to O365 impersonation
  • Anomalous login leading to mass file sharing
  • Anomalous login leading to ransomware in cloud app

Built-in ML

Machine learning is now an essential toolkit in security analytics for detecting novel types of attacks that escape traditional rules-based systems. However, a scarce ML talent pool makes it difficult for security organizations to staff applied security data scientists. To democratize an ML toolkit tailored to the needs of the security community, we introduce built-in ML, which is currently in limited public preview.

Built-in ML is designed for security analysts and engineers with no prior ML knowledge, letting them reuse ML systems designed by Microsoft’s fleet of security machine learning engineers. The benefit of built-in ML systems is that organizations don’t have to worry about traditional investments like ML training, cross-validation, or deployment, and can quickly identify threats that wouldn’t be found with a traditional approach.

Behind the covers, built-in ML uses principles of model compression and elements of transfer learning to make the models developed by Microsoft’s ML engineers ready to use for any organization’s needs. Our models are trained on diverse datasets and periodically retrained to take concept drift into account.

We are opening our flagship geo login anomaly model for any security analyst to use to detect unusual logins in SSH logs. No ML expertise is necessary; customers bring their logs into Azure Sentinel and use built-in ML systems to get analysis instantly.

Build-your-own ML

We recognize that organizations have different levels of investment in machine learning for security use cases. Some organizations may have data scientists who need to go deeper and customize the analysis further. For these organizations, we offer the option of build-your-own ML to author security analytics.

Azure Sentinel will offer a detections-authoring environment built on Databricks, Spark, and Jupyter Notebooks. It takes care of data plumbing, provides ML algorithm templates and code snippets for model training and scheduling, and will soon introduce seamless model management, model deployment, a workflow scheduler, data versioning capabilities, and specialized security analytics libraries. This will free security data scientists from tedious pipeline and platform work, letting them focus on productive analytics on a hyperscale ML security platform.

Additional resources

We will be updating this space with the technical details behind these innovations! If you have questions about turning on built-in ML or using build-your-own ML infrastructure, please reach out to askepd@microsoft.com. We also strongly recommend customers enable Fusion when they use Azure Sentinel. You can learn how to turn Fusion on by visiting our documentation, “Enable Fusion.”

Securely monitoring your Azure Database for PostgreSQL Query Store


A few months ago, I shared best practices for alerting on metrics with Azure Database for PostgreSQL. Though I was able to cover how to monitor certain key metrics on Azure Database for PostgreSQL, I did not cover how to monitor and alert on the performance of queries that your application relies on heavily. On a PostgreSQL database, from time to time you will need to investigate whether there are queries running indefinitely. Such long-running queries may interfere with overall database performance and are likely stuck on some background process. This blog post covers how you can set up alerting on query-performance-related metrics using Azure Functions and Azure Key Vault.

What is Query Store?

Query Store is a feature in Azure Database for PostgreSQL, announced in early fall 2018, that seamlessly enables tracking query performance over time. This simplifies performance troubleshooting by helping you quickly find the longest running and most resource-intensive queries. Learn how you can use Query Store in a wide variety of scenarios by visiting our documentation, “Usage scenarios for Query Store.” Query Store, when enabled, automatically captures a history of query runtime and wait statistics. It tracks this data over time so that you can see database usage patterns. Data for all users, databases, and queries is stored in a database named azure_sys in the Azure Database for PostgreSQL instance.

Query Store is not enabled on a server by default. However, it is very straightforward to opt-in on your server by following the simple steps detailed in our documentation, “Monitor performance with the Query Store.” After you have enabled Query Store to monitor your application performance, you can set alerts on various metrics such as long running queries, regressed queries, and more that you want to monitor.

How to set up alerting on Query Store metrics

You can achieve near real-time alerting on Query Store metrics monitoring using Azure Functions and Azure Key Vault. This GitHub repo provides you with an Azure Function and a PowerShell script to deploy a simple monitoring solution, which gives you some flexibility to change what and when to alert.

Alternatively, you can clone the repo to use this as a starting point and make code changes to better fit your scenario. The Visual Studio solution, when built with your changes, will automatically package the zip file you need to complete your deployment in the same fashion that is described here.

In this repo, the script DeployFunction creates an Azure function to serve as a monitor for Azure Database for PostgreSQL Query Store. Understanding the data collected by query performance insights will help you identify the metrics that you can alert on.

If you don't make any changes to the script or the function code itself and only provide the required parameters to DeployFunction script, here is what you will get:

  • A function app.
  • A function called PingMyDatabase that is time triggered every one minute.
  • An alert condition that looks for any query that has a mean execution time of longer than five seconds since the last time query store data is flushed to the disk.
  • An email when an alert condition is met, with an attached list of all of the processes that were running on the instance, as well as the list of long-running queries.
  • A key vault that contains two secrets named pgConnectionString and senderSecret that hold the connection string to your database and password to your sender email account respectively.
  • An identity for your function app with access to a Get policy on your secrets for this key vault.

You simply need to run DeployFunction on Windows PowerShell command prompt. It is important to run this script from Windows PowerShell. Using Windows PowerShell ISE will likely result in errors as some of the macros may not resolve as expected.

The script then creates the resource group and Key Vault, deploys a monitoring function app, updates app configuration settings, and sets up the required Key Vault secrets. At any point during the deployment, you can view the logs available in the .logs folder.

After the deployment is complete, you can validate the secrets by going to the resource group in the Azure portal. As shown in the following diagram, two secret keys are created, pgConnString and senderSecret. You can select the individual secrets if you want to update the value.

Screenshot of two secret keys being created


Depending on the condition set in the SENDMAILIF_QUERYRETURNSRESULTS app settings, you will receive an email alert when the condition is met.

How can I customize alert condition or supporting data in email?

After the default deployment goes through, using Azure portal you can update settings by selecting Platform features and then Application settings.

Screenshot of Platform features page

You can change the run interval, mail-to address, alert condition, or supporting data to be attached by editing the settings below and saving them.

Alternatively, you can simply use az cli to update these settings, as follows:

$cronIntervalSetting="CronTimerInterval=0 */1 * * * *"

az functionapp config appsettings set --resource-group yourResourceGroupName --name yourFunctionAppName --settings $cronIntervalSetting

Or

az functionapp config appsettings set --resource-group $resourceGroupName --name $functionAppName --settings "SENDMAILIF_QUERYRETURNSRESULTS=select * from query_store.qs_view where mean_time > 5000 and start_time >= now() - interval '15 minutes'"

Below are common cases on conditions that you can monitor and alert by either updating the function app settings after your deployment goes through or updating the corresponding value in DeployFunction.ps1 prior to your deployment:

Case: Query 3589441560 takes more than x milliseconds on average in the last fifteen minutes
Function app setting name: SENDMAILIF_QUERYRETURNSRESULTS
Sample value: select * from query_store.qs_view where query_id = 3589441560 and mean_time > x and start_time >= now() - interval '15 minutes'

Case: Queries with a cache hit ratio of less than 90 percent
Function app setting name: SENDMAILIF_QUERYRETURNSRESULTS
Sample value: select *, shared_blks_hit / nullif(shared_blks_hit + shared_blks_read, 0) as cache_hit from query_store.qs_view where shared_blks_hit / nullif(shared_blks_hit + shared_blks_read, 0) < 0.90

Case: Queries with a mean execution time of more than x milliseconds
Function app setting name: SENDMAILIF_QUERYRETURNSRESULTS
Sample value: select * from query_store.qs_view where mean_time > x and start_time >= now() - interval '15 minutes'

Case: If an alert condition is met, check whether there is an ongoing autovacuum operation, list the running processes, and attach the results to the email
Function app setting name: LIST_OF_QUERIESWITHSUPPORTINGDATA
Sample value: {"count_of_active_autovacuum": "select count(*) from pg_stat_activity where position('autovacuum:' IN query) = 1", "list_of_processes_at_the_time_of_alert": "select now()-query_start as running_since, pid, client_hostname, client_addr, usename, state, left(query,60) as query_text from pg_stat_activity"}

How secure is this?

The script provides you with the mechanism to store your secrets in a Key Vault. Your secrets are secured as they are encrypted in transit and at rest. However, the function app accesses the Key Vault over the network. If you want to avoid this and access your secrets over your virtual network (VNet) through the backbone, you will need to configure a VNet for both your function app and your Key Vault. Note that VNet support for function apps is in preview and is currently available in selected Azure regions. When the proper deployment scenarios are supported, we may revisit this script to accommodate the changes. Until then, you will need to configure a VNet manually to accomplish the setup below.

Flowchart display of manually configuring a VNet

We are always looking to hear feedback from you. If you have any feedback for the Query Store on PostgreSQL, or monitoring and alerting on query performance, please don’t hesitate to contact the Azure Database for PostgreSQL team.

Acknowledgments

Special thanks to Korhan Ileri, Senior Data Scientist, for developing the script and contributing to this post, and to Tosin Adewale, Software Engineer on the Azure CLI team, for closely partnering with us.

Microsoft Teams wins Enterprise Connect Best in Show award and delivers new experiences for the intelligent workplace


This week marks the second anniversary of the worldwide launch of Microsoft Teams, and the second year in a row winning the Enterprise Connect Best in Show award. Today, we are celebrating how our customers are using Teams as well as announcing eight new capabilities that make collaboration more inclusive, effective, and secure.

The post Microsoft Teams wins Enterprise Connect Best in Show award and delivers new experiences for the intelligent workplace appeared first on Microsoft 365 Blog.

Bing delivers text-to-speech and greater coverage of intelligent answers and visual search

Visual Studio Subscriptions – everything you need for Azure development


Recently, our product team has been talking with Visual Studio subscribers to learn more about how they approach cloud development. Many of the subscribers we spoke with mentioned that they were unaware of the benefits included with a Visual Studio subscription, that are intended to make learning new technologies and prototyping easy.

If you’re interested in cloud development, or simply want to learn more about new development tools, techniques, and frameworks, your subscription includes a wide range of benefits you can use. The level of these benefits you have depends on your subscription type. Check out this benefits video or read on below for an overview.

Cloud services

Subscribers have access to unlimited Azure DevOps accounts and access to features on any Azure DevOps organization to share code, track work, and ship software. You can use Azure Pipelines to run Continuous Integration and Continuous Delivery jobs and automate the compilation, testing and deployment of applications, for all languages, platforms and cloud services. You also get access to Azure Boards, which lets you deliver software faster thanks to proven agile tools for planning, tracking and discussing work items across teams.

Your subscription has a $50-$150 monthly Azure credit, which is ideal for experimenting with and learning about Azure services—your own personal sandbox for dev/test. When you activate this benefit, a separate Azure subscription is created with a monthly credit balance that renews each month while you remain an active Visual Studio subscriber. If the credits run out before the end of the month, the Azure services are suspended until more credits are available. No surprises, no cost, no credit card required. If you wonder what you can buy with a $50 credit, check out this blog post for some ideas.

If you’d like to collaborate with your team in the cloud, the Azure Dev/Test offer enables you to quickly get up and running with dev/test environments in the cloud using exclusive pre-configured virtual machines and up to a 50% discount on a range of services. You have the flexibility to create multiple Azure subscriptions based on this offer, enabling you to maintain isolated environments and a separate bill for different projects or teams.

Visual Studio Enterprise subscriptions include Enterprise Mobility + Security to help you secure and manage identities, devices, apps and data.

Developer tools

Subscribers have continued access to the latest versions of Visual Studio IDE on Windows & Mac.

Cloud migration tools such as CAST Highlight by CAST (Enterprise only) and CloudPilot by UnifyCloud were recently added as new benefits to help you get a head start on your app modernization journey and migration to the cloud.

Training and support

Take your skills to the next level with LinkedIn Learning and Pluralsight courses included in your subscriber benefits.

Your subscription also provides access to technical experts, Azure Advisory Chat, and Azure Community to help you solve issues and answer questions. Just submit a technical support ticket, questions via chat, or start community discussions.

You can find all these benefits by logging into the subscriber portal at https://my.visualstudio.com. Contact your admin for access to the portal if you do not have a currently assigned subscription. For more information on how to use your benefits, check out our docs.

We would love to hear your feedback, suggestions, thoughts, and ideas in the comments below.

The post Visual Studio Subscriptions – everything you need for Azure development appeared first on The Visual Studio Blog.

March 2019 changes to Azure Monitor Availability Testing


Azure Monitor Availability Testing allows you to monitor the availability and responsiveness of any HTTP or HTTPS endpoint that is accessible from the public internet. You don't have to add anything to the web site you're testing. It doesn't even have to be your site; you could test a REST API service you depend on. This service sends web requests to your application at regular intervals from points around the world, and alerts you if your application doesn't respond, or if it responds slowly.

At the end of this month we are deploying some major changes to this service. These changes will improve performance and reliability, and will allow us to make further improvements to the service in the future. This post highlights some of the changes, including those you should be aware of to ensure that your tests continue running without any interruption.

Reliability improvements

We are deploying a new version of the availability testing service. This new version should improve the reliability of the service, resulting in fewer false alarms. This change also increases the capacity for the creation of new availability tests, which is greatly needed as Application Insights usage continues to grow. Additionally, the architecture of this new design enables us to add new regions much more easily. Expect to see additional regions from which you can test your app’s availability in the future!

New UI

Along with the new backend architecture, we are updating the availability testing UI with a brand new design. See the image below for a sneak peek of the UI that we will be rolling out for all customers in the next few weeks. 

Screenshot of the new design for availability testing UI

The new design is more consistent with other experiences in Application Insights. It reduces the number of clicks needed to see highly requested information, and surfaces insights about your availability tests to the right side of the availability scatter plot. The new chart supports time brushing: you can click and drag over a section of the chart to zoom into just that time period. Additionally, this design loads faster than the previous one!

IP address changes

If your web server is restricted to serving specific clients and you have whitelisted the IP addresses our web tests use to reach your app, be aware that we are deploying our service on new IP ranges. We are increasing the capacity of our service, and this requires adding additional test agents.

Effective March 20, 2019, we will begin running tests from our new test agents, and this will require you to update your whitelist. The list containing all of the necessary whitelisted IPs, including both our previous IP ranges and the new IP ranges, is published in our documentation, “IP addresses used by Application Insights and Log Analytics.”
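If you maintain the whitelist in code or configuration, Python's standard `ipaddress` module can check whether a client address falls inside an allowed CIDR range. The ranges below are placeholders for illustration; use the actual ranges from the documentation referenced above:

```python
import ipaddress

# Hypothetical CIDR ranges for illustration only; substitute the ranges
# published in "IP addresses used by Application Insights and Log Analytics".
ALLOWED_RANGES = [ipaddress.ip_network(cidr) for cidr in (
    "20.40.104.96/27",
    "52.229.216.48/28",
)]

def is_allowed(client_ip: str) -> bool:
    """Return True if client_ip falls inside any whitelisted range."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in ALLOWED_RANGES)
```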

France South changes

France South will no longer be offered as a region from which you can perform availability tests. All existing tests in France South will be moved to a duplicate service running in France Central which will appear in the portal as “France Central (formerly France South).”  If you already have a test running in France Central, this means that your test will run from France Central twice per time period. Your existing alert rules will not be affected.

New testing region

We will be adding an additional region within Europe from which to run availability tests. An announcement will be made when this region is available.

Next steps

Log into your Azure account today to get started with the new Application Insights Availability UX. You can also learn more about how to get started by visiting our “Azure Monitor Documentation.”

.NET Framework March 2019 Update


Today, we released the March 2019 Update.

Quality and Reliability

This release contains the following quality and reliability improvements.

CLR

  • Addressed an issue where the Framework would throw an exception if the year in a parsed date was greater than or equal to the first year of the next era; the Framework no longer throws this exception. [603100]
  • Updated Japanese era date formatting for the first year in an era when the format pattern uses “y年”: the year is now formatted with the symbol “元” instead of the year number 1. Formatting day numbers that include “元” is also supported. [646179]
  • Removed dependency of single quote to output the Gannen character in Japanese Era formatting. [777182]
  • Addressed the registry key setting HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Nls\Calendars\JapaneseEras to support Japanese eras. [568291]

Getting the Update

The Update is available via Windows Server Update Services and Microsoft Update Catalog.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog. For Windows 10 update 1607, Windows 10 update 1703, Windows 10 update 1709, and Windows 10 update 1803, the .NET Framework updates are part of the Windows 10 Monthly Rollup. The following table is for Windows 10 and Windows Server 2016+ versions.

Product version and update KB:

  • Windows 10 1803 (April 2018 Update), .NET Framework 3.5, 4.7.2: Catalog KB 4489894
  • Windows 10 1709 (Fall Creators Update), .NET Framework 3.5, 4.7.1, 4.7.2: Catalog KB 4489890
  • Windows 10 1703 (Creators Update), .NET Framework 3.5, 4.7, 4.7.1, 4.7.2: Catalog KB 4489888
  • Windows 10 1607 (Anniversary Update) and Windows Server 2016, .NET Framework 3.5, 4.6.2, 4.7, 4.7.1, 4.7.2: Catalog KB 4489889

 

The following table is for earlier Windows and Windows Server versions.

Product version and update KB:

  • Windows 8.1, Windows RT 8.1, and Windows Server 2012 R2 (Catalog KB 4489488)
      .NET Framework 3.5: Catalog KB 4488663
      .NET Framework 4.5.2: Catalog KB 4488667
      .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1: Catalog KB 4488665
  • Windows Server 2012 (Catalog KB 4489487)
      .NET Framework 3.5: Catalog KB 4488660
      .NET Framework 4.5.2: Catalog KB 4488668
      .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: Catalog KB 4488664
  • Windows 7 SP1 and Windows Server 2008 R2 SP1 (Catalog KB 4489486)
      .NET Framework 3.5: Catalog KB 4488662
      .NET Framework 4.5.2: Catalog KB 4488669
      .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: Catalog KB 4488666
  • Windows Server 2008 SP2 (Catalog KB 4489489)
      .NET Framework 3.5: Catalog KB 4488661
      .NET Framework 4.5.2: Catalog KB 4488669
      .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: Catalog KB 4488666

Previous Monthly Rollups

The last few .NET Framework Monthly updates are listed below for your convenience:

The post .NET Framework March 2019 Update appeared first on .NET Blog.


Microsoft Azure for the Gaming Industry


This blog post was co-authored by Patrick Mendenall, Principal Program Manager, Azure. 

We are excited to join the Game Developers Conference (GDC) this week to learn what’s new and share our work in Azure focused on enabling modern, global games via cloud and cloud-native technologies.

Cloud computing is increasingly important for today’s global gaming ecosystem, empowering developers of any size to reach gamers in any part of the world. Azure’s 54 datacenter regions and its robust global network provide globally available, high-performance services, as well as a platform that is secure, reliable, and scalable to meet current and emerging infrastructure needs. For example, earlier this month we announced the availability of Azure South Africa regions. Azure services enable every phase of the game development lifecycle, from design and build through testing, publishing, monetization, measurement, engagement, and growth, providing:

  • Compute: Gaming services rely on a robust, reliable, and scalable compute platform. Azure customers can choose from a range of compute- and memory-optimized Linux and Windows VMs to run their workloads, services, and servers, including auto-scaling, microservices, and functions for modern, cloud-native games.
  • Data: The cloud is changing the way applications are designed, including how data is processed and stored. Azure provides high availability, global data, and analytics solutions based on both relational databases as well as big data solutions.
  • Networking: Azure operates one of the largest dedicated long-haul network infrastructures worldwide, with over 70,000 miles of fiber and sub-sea cable, and more than 130 edge sites. Azure offers customizable networking options to allow for fast, scalable, and secure network connectivity between customer premises and global Azure regions.
  • Scalability: Azure offers nearly unlimited scalability. Given the cyclical usage patterns of many games, using Azure enables organizations to rapidly increase and/or decrease the number of cores needed, while only having to pay for the resources that are used.
  • Security: Azure offers a wide array of security tools and capabilities, to enable customers to secure their platform, maintain privacy and controls, meet compliance requirements (including GDPR), and ensure transparency.
  • Global presence: Azure has more regions globally than any other cloud provider, offering the scale needed to bring games and data closer to users around the world, preserving data residency, and providing comprehensive compliance and resiliency options for customers. Using Azure’s footprint, the cost, the time, and the complexity of operating a game at global scale can be reduced.
  • Open: With Azure you can use the software you choose, whether operating systems, engines, database solutions, or open source, and run it on Azure.

We’re also excited to bring PlayFab into the Azure family. Together, Azure and PlayFab are a powerful combination for game developers. Azure brings reliability, global scale, and enterprise-level security, while PlayFab provides Game Stack with managed game services, real-time analytics, and comprehensive LiveOps capabilities.

We look forward to meeting many of you at GDC 2019 to learn about your ideas in gaming, discussing where cloud and cloud-native technologies can enable your vision, and sharing more details on Azure for gaming. Join us at the conference or contact our gaming industry team at azuregaming@microsoft.com.

Details on all of these are available via links below.

Microsoft and NVIDIA bring GPU-accelerated machine learning to more developers


With ever-increasing data volume and latency requirements, GPUs have become an indispensable tool for doing machine learning (ML) at scale. This week, we are excited to announce two integrations that Microsoft and NVIDIA have built together to unlock industry-leading GPU acceleration for more developers and data scientists.

  • Azure Machine Learning service is the first major cloud ML service to integrate RAPIDS, an open source software library from NVIDIA that allows traditional machine learning practitioners to easily accelerate their pipelines with NVIDIA GPUs
  • ONNX Runtime has integrated the NVIDIA TensorRT acceleration library, enabling deep learning practitioners to achieve lightning-fast inferencing regardless of their choice of framework.

These integrations build on an already-rich infusion of NVIDIA GPU technology on Azure to speed up the entire ML pipeline.

“NVIDIA and Microsoft are committed to accelerating the end-to-end data science pipeline for developers and data scientists regardless of their choice of framework,” says Kari Briski, Senior Director of Product Management for Accelerated Computing Software at NVIDIA. “By integrating NVIDIA TensorRT with ONNX Runtime and RAPIDS with Azure Machine Learning service, we’ve made it easier for machine learning practitioners to leverage NVIDIA GPUs across their data science workflows.”

Azure Machine Learning service integration with NVIDIA RAPIDS

Azure Machine Learning service is the first major cloud ML service to integrate RAPIDS, providing up to 20x speedup for traditional machine learning pipelines. RAPIDS is a suite of libraries built on NVIDIA CUDA for doing GPU-accelerated machine learning, enabling faster data preparation and model training. RAPIDS dramatically accelerates common data science tasks by leveraging the power of NVIDIA GPUs.

Exposed on Azure Machine Learning service as a simple Jupyter Notebook, RAPIDS uses NVIDIA CUDA for high-performance GPU execution, exposing GPU parallelism and high memory bandwidth through a user-friendly Python interface. It includes a dataframe library called cuDF which will be familiar to Pandas users, as well as an ML library called cuML that provides GPU versions of all machine learning algorithms available in Scikit-learn. And with DASK, RAPIDS can take advantage of multi-node, multi-GPU configurations on Azure.
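Because cuDF mirrors the pandas API, a typical dataframe workload needs little more than an import change to move to the GPU. The pandas sketch below illustrates the idea with hypothetical telemetry data; per RAPIDS' pandas-compatible API, replacing the pandas import with `import cudf` should run the same expression GPU-accelerated (exact API coverage may vary):

```python
import pandas as pd  # swap for `import cudf` to run on a GPU with RAPIDS

# Hypothetical telemetry data for illustration.
df = pd.DataFrame({
    "device": ["a", "a", "b", "b", "b"],
    "latency_ms": [12.0, 18.0, 7.0, 9.0, 11.0],
})

# The same groupby/aggregate expression works in cuDF.
summary = df.groupby("device").latency_ms.mean()
```

For model training, cuML plays the analogous role for scikit-learn estimators, and DASK extends both across multiple GPUs and nodes.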

Learn more about RAPIDS on Azure Machine Learning service or attend the RAPIDS on Azure session at NVIDIA GTC.

ONNX Runtime integration with NVIDIA TensorRT in preview

We are excited to open source the preview of the NVIDIA TensorRT execution provider in ONNX Runtime. With this release, we are taking another step towards open and interoperable AI by enabling developers to easily leverage industry-leading GPU acceleration regardless of their choice of framework. Developers can now tap into the power of TensorRT through ONNX Runtime to accelerate inferencing of ONNX models, which can be exported or converted from PyTorch, TensorFlow, MXNet and many other popular frameworks. Today, ONNX Runtime powers core scenarios that serve billions of users in Bing, Office, and more.

With the TensorRT execution provider, ONNX Runtime delivers better inferencing performance on the same hardware compared to generic GPU acceleration. We have seen up to 2X improved performance using the TensorRT execution provider on internal workloads from Bing MultiMedia services.
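ONNX Runtime assigns model operators to its registered execution providers in priority order, falling back to the next provider when an operator is not supported. The helper below sketches that preference logic in plain Python; the function name and ordering policy are illustrative, not ONNX Runtime's internal code:

```python
# Preference order: TensorRT first, then generic CUDA, then CPU fallback.
PREFERENCE = [
    "TensorrtExecutionProvider",
    "CUDAExecutionProvider",
    "CPUExecutionProvider",
]

def preferred_providers(available):
    """Order the available execution providers by preference,
    keeping any unrecognized providers at the end in their original order."""
    ranked = [p for p in PREFERENCE if p in available]
    extras = [p for p in available if p not in PREFERENCE]
    return ranked + extras
```

The resulting list reflects the order in which you would want graph nodes assigned: TensorRT where possible, generic CUDA acceleration next, and CPU as the universal fallback.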

To learn more, check out our in-depth blog on the ONNX Runtime and TensorRT integration or attend the ONNX session at NVIDIA GTC.

Accelerating machine learning for all

Our collaboration with NVIDIA marks another milestone in our venture to help developers and data scientists deliver innovation faster. We are committed to accelerating the productivity of all machine learning practitioners regardless of their choice of framework, tool, and application. We hope these new integrations make it easier to drive AI innovation and strongly encourage the community to try it out. Looking forward to your feedback!

The Value of IoT-Enabled Intelligent Manufacturing


As the manufacturing industry tackles some significant challenges including an aging workforce, compliance issues, and declining revenue, the Internet of Things (IoT) is helping reinvent factories and key processes. At the heart of this transformation journey is the design and use of IoT-enabled machines that help lead to reduced downtime, increased productivity, and optimized equipment performance.

Learn how you can apply insights from real-world use cases of IoT-enabled intelligent manufacturing when you attend the Manufacturing IoT webinar on March 28th. For additional hands-on, actionable insights around intelligent edge and intelligent cloud IoT solutions, join us on April 19th for the Houston Solution Builder Conference.

IoT in action webinar series: From Reaction to Prediction - IoT in Manufacturing March 28th, 2019.

Using IoT solutions to move from a reactive to predictive model

In the past, factory managers often had no way of knowing when a machine might begin to perform poorly or completely shut down. When something went wrong, getting the equipment back up and running was often time consuming and based on trial-and-error troubleshooting. And for the company, any unplanned downtime meant slowed or halted production, resulting in lower productivity and higher costs.

The development of IoT-enabled machines with sensors allows companies to improve overall efficiency, performance, and profitability. Rockwell Automation found it time consuming and challenging to monitor its equipment in remote locations. Using Microsoft Azure to connect them, Rockwell Automation now sees real-time performance information and can proactively maintain equipment before an incident occurs.

Kontron S&T, a Microsoft partner, also recently developed the SUSiEtec platform, an end-to-end IoT solution that enables companies to build scalable edge computing solutions using Microsoft Azure IoT Edge integration and customization services. With SUSiEtec, companies can dynamically decide where data analysis will take place and manage distributed IoT devices regardless of where they’re located or how many devices are used. Join the Manufacturing IoT webinar to learn more about SUSiEtec and how to develop secure, manageable IoT solutions for manufacturing.

Keeping IoT data secure with Azure Sphere

Using IoT to create the factory of the future also means additional access points into the factory network and systems, so creating a secure network is top priority. Factory managers typically access IoT data using mobile devices, which creates even more access points. For a true connected IoT experience and factory, security is foundational.

Azure Sphere provides a foundation of security and connectivity that starts in the silicon and extends to the cloud. Together, Azure Sphere microcontrollers (MCUs), secured OS, and turnkey cloud security service guard every Azure Sphere device accessing IoT data, IoT sensors, and IoT-enabled machines. By adding useful software to Edge hardware, factories are protected with IT-proven standards as well as new Operational Technology (OT) network security.

Getting ready to develop IoT solutions

Moving to a factory of the future starts with determining what you want to achieve through the IoT-enabled machine. If predictive maintenance is the end goal, start by conducting an inventory of data sources. Identify all potential sources and types of relevant data to determine what is most essential. Then you’ll need to lay the groundwork for a robust predictive model by pulling in data that includes both expected behavior and failure logs.

With the initial logistics determined, the next step is to create a model, then test and iterate to figure out which model is best at forecasting the timing of unit failures. By moving to a live operational setting, you can apply the model to live, streaming data to observe how it works in real-world conditions. After adjusting your maintenance processes, systems, and resources to act on the new insights, the final step is to integrate the model into operations with Azure IoT Central.
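As a toy illustration of the last two steps (applying a model to streaming data and raising maintenance alerts), the sketch below substitutes a simple rolling-mean threshold for a real trained model; the window size, threshold, and data are all hypothetical:

```python
from collections import deque

def rolling_failure_score(readings, window=5, threshold=80.0):
    """Toy 'model': flag a unit when the rolling mean of a sensor reading
    over the last `window` samples exceeds `threshold`.
    Yields (index, score, alert) for each incoming reading."""
    buf = deque(maxlen=window)
    for i, reading in enumerate(readings):
        buf.append(reading)
        score = sum(buf) / len(buf)
        yield i, score, score > threshold

# Hypothetical temperature stream trending upward.
stream = [70, 72, 75, 81, 88, 93, 95]
alerts = [i for i, score, alert in rolling_failure_score(stream) if alert]
```

In production, the scoring function would be a trained predictive model and the alerts would feed maintenance requests and remote-diagnostics workflows.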

Of course, not all companies have the skillset or resources to develop an IoT solution from scratch. To accelerate the design, development, and implementation process, partners can utilize the Microsoft Accelerator program. By using open-source code or leveraging proven architectures, companies can create a fully customizable solution and quickly connect devices to existing systems in minutes. For instance, the Predictive Maintenance solution accelerator combines key Azure IoT services like IoT Hub and Stream Analytics to proactively optimize maintenance and create automatic alerts and actions for remote diagnostics, maintenance requests, and other workflows.

Digitally transforming your own business and building or deploying IoT solutions that are highly scalable and economical to manage takes partnerships. Join Microsoft and Kontron S&T on March 28th for the webinar, Go from Reaction to Prediction – IoT in Manufacturing, and discover new approaches for achieving your business goals.

Breaking the wall between data scientists and app developers with Azure DevOps


As data scientists, we are used to developing and training machine learning models in our favorite Python notebook or an integrated development environment (IDE), like Visual Studio Code (VS Code). Then we hand off the resulting model to an app developer, who integrates it into the larger application and deploys it. Often, bugs or performance issues go undiscovered until the application has already been deployed. The resulting friction between app developers and data scientists to identify and fix the root cause can be a slow, frustrating, and expensive process.

As AI is infused into more business-critical applications, it is increasingly clear that we need to collaborate closely with our app developer colleagues to build and deploy AI-powered applications more efficiently. As data scientists, we are focused on the data science lifecycle, namely data ingestion and preparation, model development, and deployment. We are also interested in periodically retraining and redeploying the model to adjust for freshly labeled data, data drift, user feedback, or changes in model inputs.

The app developer is focused on the application lifecycle – building, maintaining, and continuously updating the larger business application that the model is part of. Both parties are motivated to make the business application and model work well together to meet end-to-end performance, quality, and reliability goals.

What is needed is a way to bridge the data science and application lifecycles more effectively. This is where Azure Machine Learning and Azure DevOps come in. Together, these platform features enable data scientists and app developers to collaborate more efficiently while continuing to use the tools and languages we are already familiar and comfortable with.

The data science lifecycle or “inner loop” for (re)training your model, including data ingestion, preparation, and machine learning experimentation, can be automated with the Azure Machine Learning pipeline. Likewise, the application lifecycle or “outer loop”, including unit and integration testing of the model and the larger business application, can also be automated with the Azure DevOps pipeline. In short, the data science process is now part of the enterprise application’s Continuous Integration (CI) and Continuous Delivery (CD) pipeline. No more finger pointing when there are unexpected delays in deploying apps, or when bugs are discovered after the app has been deployed in production. 

Azure DevOps: Integrating the data science and app development cycles

Let’s walk through the diagram below to understand how this integration between the data science cycle and the app development cycle is achieved.

Diagram displaying the integration between the data science cycle and the app development cycle.

A starting assumption is that both the data scientists and app developers in your enterprise use Git as your code repository. As a data scientist, any changes you make to training code will trigger the Azure DevOps CI/CD pipeline to orchestrate and execute multiple steps including unit tests, training, integration tests, and a code deployment push. Likewise, any changes the app developer or you make to application or inferencing code will trigger integration tests followed by a code deployment push. You can also set specific triggers on your data lake to execute both model retraining and code deployment steps. Your model is also registered in the model store, which lets you look up the exact experiment run that generated the deployed model.

With this approach, you as the data scientist retain full control over model training. You can continue to write and train models in your favorite Python environment. You get to decide when to execute a new ETL / ELT run to refresh the data to retrain your model. Likewise, you continue to own the Azure Machine Learning pipeline definition including the specifics for each of its data wrangling, feature extraction, and experimentation steps, such as compute target, framework, and algorithm. At the same time, your app developer counterpart can sleep comfortably knowing that any changes you commit will pass through the required unit, integration testing, and human approval steps for the overall application.

With the soon-to-be-released Data Prep Services (box in the bottom left of the diagram above), you will also be able to set thresholds for data drift and automate the retraining of your models!
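While the managed drift detection is not yet released, the underlying idea can be sketched with a simple statistic: compare freshly collected data against the training baseline and trigger retraining when the shift crosses a configured threshold. The metric and numbers below are hypothetical, not the Data Prep Services implementation:

```python
def mean_drift(baseline, current):
    """Relative shift in the mean of a feature between the training
    baseline and freshly collected data."""
    b = sum(baseline) / len(baseline)
    c = sum(current) / len(current)
    return abs(c - b) / (abs(b) or 1.0)

def should_retrain(baseline, current, threshold=0.10):
    """Trigger retraining when drift exceeds the configured threshold."""
    return mean_drift(baseline, current) > threshold

baseline = [10.0, 11.0, 9.0, 10.0]   # training-time mean: 10.0
fresh    = [12.0, 13.0, 11.5, 12.5]  # mean has shifted upward
```

In the CI/CD setup described above, a check like this would run on a schedule and, when it fires, kick off the Azure Machine Learning training pipeline followed by the usual testing and approval gates.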

In subsequent blog posts, we will cover in detail more topics related to CI/CD, including the following:

  1. Best practices to manage compute costs with Azure DevOps for Machine Learning
  2. Managing model drift with Azure Machine Learning Data Prep Services
  3. Best practices for controlled rollout and A/B testing of deployed models

Learn more

Azure Stack IaaS – part five


Self-service is core to Infrastructure-as-a-Service (IaaS). Back in the virtualization days, you had to wait for someone to create a VLAN for you, carve out a LUN, and find space on a host. If Microsoft Azure ran that way, we would have needed to hire more and more admins as our cloud business grew.

Do it yourself

A different approach was required, which is why IaaS is important. Azure's IaaS gives the owner of the subscription everything they need to create virtual machines (VMs) and other resources on their own, without involving an administrator. To learn more visit our documentation, “Introduction to Azure Virtual Machines” and “Introduction to Azure Stack virtual machines.”

Let me give you a few examples that show Azure and Azure Stack self-service management of VMs.

Deployment

Creating a VM is as simple as going through a wizard. You can create the VM by specifying everything needed for the VM in the “Create virtual machine” blade. You can include the operating system image or marketplace template, the size (memory, CPUs, number of disks, and NICs), high availability, storage, networking, monitoring, and even in guest configuration.

Screenshot of the Create virtual machine blade

Learn more by visiting the following resources:

Daily operations

That’s great for deployment, but what about later down the road when you need to quickly change the VM? Azure and Azure Stack have you covered there too. The settings section of the VM allows you to make changes to networking, disks, size (CPUs and memory), in-guest configuration extensions, high availability, and more.

Screenshot of the settings section of a virtual machine

One thing that was always a pain in the virtualization days was getting the right firewall ports open. Now you can manage this on your own without waiting on the networking team. In Azure and Azure Stack, firewall rules are called network security groups. These can all be configured in a self-service manner, as shown below.

Screenshot of configuration in a self-service manner

Learn more about managing Azure VMs firewall ports by visiting our documentation, “How to open ports to a virtual machine with the Azure portal.”

Disk and image self-service is important too. In the virtualization days this was also a big pain point: I had to hand my disks and images to an admin to get them into the system. Fortunately, storage is self-service in Azure and Azure Stack. Your IaaS subscription includes access to both storage accounts and managed disks, from which you can upload and download your disks and images.

Screenshot of uploading and downloading screen for disks and images

You can learn more by visiting our documentation, “Upload a generalized VHD and use it to create new VMs in Azure” and “Download a Linux VHD from Azure.”

Managed disks also give you the option to create and export snapshots.

Screenshot of Managed disks allowing the option to create and export snapshots

Find more information by visiting the following resources:

Other resources a VM owner can manage include load balancer configuration, DNS, VPN gateways, subnets, attach/detach disks, scale up/down, scale in/out, and so many other things it is astounding.

Support and troubleshooting

When there is a problem, no one wants to wait for someone else to help. The more tools you have to correct the situation, the better. While operating one of the largest public clouds, the Azure IaaS team has learned the top issues customers face and their support needs. To empower VM owners to solve these issues themselves, they have created a number of self-service support and troubleshooting features. Perhaps the most widely used is the Reset Password feature. Why wasn’t this feature around in the virtualization days?

Screenshot display of the Reset Password feature

Learn more by visiting our documentation for resetting access on an Azure Windows VM and resetting access on an Azure Linux VM.

I need to mention a setting that has prevented me from creating a support problem because of my absentmindedness. It is the Lock feature. A lock can prevent any change or deletion on a VM or any other resource.

Screenshot of the Lock feature being added

Learn more about locking VMs and other Azure resources by visiting our documentation, “Locking resources to prevent unexpected changes.”

Other useful troubleshooting and support features include re-deploying your VM to another host if you suspect your VM is having problems on the host it is currently on, checking boot diagnostics to see the state of the VM before it fully boots and is ready for connections, and reviewing performance diagnostics. As we learn and build these features in Azure, they eventually find their way to Azure Stack so that your admins don’t have to work so hard to support you.

Learn more by visiting our documentation, “Troubleshooting Azure Virtual Machines.”

Happy infrastructure admins

When you can take care of yourself, your admins can manage the underlying infrastructure without being interrupted by you. This means they can work on the things important to them and you can focus on what is important to you.

In this blog series

We hope you come back to read future posts in this series. Here are some of our planned upcoming topics:

Microsoft’s Azure Cosmos DB is named a leader in the Forrester Wave: Big Data NoSQL


We’re excited to announce that Forrester has named Microsoft as a Leader in The Forrester Wave™: Big Data NoSQL, Q1 2019 based on their evaluation of Azure Cosmos DB. We believe Forrester’s findings validate the exceptional market momentum of Azure Cosmos DB and how happy our customers are with the product.

NoSQL platforms are on the rise

According to Forrester, “half of global data and analytics technology decision makers have either implemented or are implementing NoSQL platforms, taking advantage of the benefits of a flexible database that serves a broad range of use cases…While many organizations are complementing their relational databases with NoSQL, some have started to replace them to support improved performance, scale, and lower their database costs.”

Azure Cosmos DB has market momentum

Azure Cosmos DB is Microsoft's globally distributed, multi-model database service for mission-critical workloads. Azure Cosmos DB provides turnkey global distribution with unlimited endpoint scalability, elastic scaling of throughput (at multiple granularities, e.g., database, key-space, tables, and collections) and storage worldwide, single-digit millisecond latencies at the 99th percentile, five well-defined consistency models, and guaranteed high availability, all backed by industry-leading comprehensive SLAs. Azure Cosmos DB automatically indexes all data without requiring developers to deal with schema or index management. It is a multi-model service, which natively supports document, key-value, graph, and column-family data models. As a service born natively in the cloud, Azure Cosmos DB is carefully engineered with multitenancy and global distribution from the ground up. As a foundational service in Azure, Azure Cosmos DB is ubiquitous, running in all public regions, DoD, and sovereign clouds, with an industry-leading list of compliance certifications and enterprise-grade security, all at no extra cost.

Azure Cosmos DB’s unique approach of providing wire protocol-compatible APIs for popular open-source databases ensures that you can continue to use Azure Cosmos DB in a cloud-agnostic manner while still leveraging a robust database platform natively designed for the cloud. You get the flexibility to run your Cassandra, Gremlin, and MongoDB apps fully managed with no vendor lock-in. While Azure Cosmos DB exposes APIs for these popular open-source databases, it does not rely on their implementations for realizing the semantics of the corresponding APIs.


According to the Forrester report, Azure Cosmos DB is starting to achieve strong traction and “Its simplified database with relaxed consistency levels and low-latency access makes it easier to develop globally distributed apps.” Forrester mentioned specifically that “Customer references like its resilience, low maintenance, cost effectiveness, high scalability, multi-model support, and faster time-to-value.”

Forrester notes Azure Cosmos DB’s global availability across all Azure regions and how customers use it for operational apps, real-time analytics, streaming analytics, and Internet-of-Things (IoT) analytics. Azure Cosmos DB powers many worldwide enterprises and Microsoft services such as Xbox, Skype, Teams, Azure, Office 365, and LinkedIn.

To fulfill their vision, in addition to operational data processing, organizations using Azure Cosmos DB increasingly invest in artificial intelligence (AI) and machine learning (ML) running on top of globally distributed data in Azure Cosmos DB. Azure Cosmos DB enables customers to seamlessly build, deploy, and operate low-latency machine learning solutions on planet-scale data. The deep integration between Spark and Azure Cosmos DB enables the end-to-end ML workflow: managing, training, and inferencing machine learning models on top of multi-model, globally distributed data for time series forecasting, deep learning, predictive analytics, fraud detection, and many other use cases.

Azure Cosmos DB’s commitment

We are committed to making Azure Cosmos DB the best globally distributed database for all businesses and modern applications. With Azure Cosmos DB, we believe that you will be able to write amazingly powerful, intelligent, modern apps and transform the world.

If you are using our service, please feel free to reach out to us at AskCosmosDB@microsoft.com any time. If you are not yet using Azure Cosmos DB, you can try Azure Cosmos DB for free today; no sign-up or credit card is required. If you need any help or have questions or feedback, please reach out to us any time. For the latest Azure Cosmos DB news and features, please stay up-to-date by following us on Twitter #CosmosDB, @AzureCosmosDB. We look forward to seeing what you will build with Azure Cosmos DB!

Download the full Forrester report and learn more about Azure Cosmos DB.

Bing delivers text-to-speech and greater coverage of intelligent answers and visual search

At NVIDIA’s GPU Technology Conference this week, Bing demonstrated natural-sounding text-to-speech AI, expanded intelligent answers, and the ability to quickly see multiple objects auto-detected within an image to search for visual matches. All these features help you find what you’re looking for faster, and are powered by Azure virtual machines running on NVIDIA GPUs optimized with NVIDIA CUDA-X AI software libraries.
 

Text-to-speech

The updated Bing app can now change text to speech, meaning Bing can speak answers to your queries back to you with a voice that’s nearly indistinguishable from a human’s. This advance was made possible by breakthroughs in deep neural networks that give our AI human-like intonation and clear articulation of words. In addition to improvements in our conversational AI, this capability as a real-time service would not be possible without the higher processing power of NVIDIA GPUs.
 

The Bing app also supports speech as an input, meaning you can speak to your mobile device and Bing will convert your spoken words to text and run the search for you. Simply press the microphone button on the app homepage, speak your question, and you’ll get search results.
             
[Image: voice-to-text in the Bing app]
 

Intelligent answers

Bing intelligent answers allow you to get comprehensive, summarized information aggregated across several sources in response to certain queries.

We’re now taking intelligent answers one step further by advancing our deep learning models. These models require a lot of processing power, so we’re leveraging recent advances in GPU technology that allow us to process entire web pages much faster and more efficiently than traditional models powered by CPUs. This advance allows us to provide answers for harder questions than ever before. For example, instead of the relatively simple answer to ‘what is the capital of Bangladesh’, Bing can now provide answers to more complex questions, such as ‘what are different types of lighting for a living room’, quicker than before.
           
[Image: an intelligent answer in Bing]
 

Visual search

Visual search is another area in which recent developments have enabled huge strides in efficiency and coverage.

Visual search allows you to search using an image. For example, if you see an image of an accent light you like, Bing can show visually-similar decor and even show purchase options at different price points if the item is available online. To save you time, visual search also automatically detects and places clickable hotspots over important objects you may want to search for next.
 
           
Our advanced visual search capabilities such as object detection run quickly and automatically using NVIDIA GPUs for inferencing, which deliver dramatically better processing efficiency than CPU-powered inference, unlocking this scenario for our customers.
 

New intelligent scenarios powered by Azure and NVIDIA

These new scenarios are all made possible by Bing powered by Azure N-series virtual machines running NVIDIA GPUs. Text-to-speech, speech-to-text, intelligent answers, and visual search are all part of the next great search frontier, and we’re very excited to see what our partnership continues to enable in the future.
 

Using Newtonsoft.Json in a Visual Studio extension


The ever-popular Newtonsoft.Json NuGet package by James Newton-King is used throughout Visual Studio’s code base. Visual Studio 2015 (14.0) was the first version to ship with it. Later updates to Visual Studio also updated its Newtonsoft.Json version when an internal feature needed it. Today it is an integral part of Visual Studio, and you can consider it a part of the SDK alongside other Visual Studio assemblies.

Extensions can therefore also use the very same Newtonsoft.Json shipped with Visual Studio. It can, however, be confusing to know which version to reference and whether to ship the Newtonsoft.Json.dll file itself with the extension. And what if the extension supports older versions of Visual Studio that don’t come with Newtonsoft.Json?

I promise it’s not confusing once you know how, so let’s start at the beginning with versioning.

Versioning

Just like with any other Visual Studio SDK assemblies, extensions must reference the lowest version matching the lower bound of supported Visual Studio versions. For instance, if the extension supports Visual Studio 14.0, 15.0, and 16.0, then it must reference the 14.0 SDK assemblies. The same is true for referencing Newtonsoft.Json, but it is less obvious to know which version shipped when.

Here’s the breakdown:

  • Visual Studio 16.0 – Newtonsoft.Json 9.0.1
  • Visual Studio 15.3 – Newtonsoft.Json 9.0.1
  • Visual Studio 15.0 – Newtonsoft.Json 8.0.3
  • Visual Studio 14.0 – Newtonsoft.Json 6.0.x
  • Visual Studio 12.0 – none

So, if your extension’s lowest supported Visual Studio version is 14.0, then you must reference Newtonsoft.Json version 6.0.x. In fact, make sure the entire dependency tree of your references doesn’t exceed that version.
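The mapping above is simple enough to encode as a lookup. Here is an illustrative Python sketch (the function name is hypothetical, and this merely restates the table above):

```python
# Newtonsoft.Json version to reference, keyed by the lowest Visual Studio
# version the extension supports (per the table above).
NEWTONSOFT_BY_VS_MIN = {
    "16.0": "9.0.1",
    "15.3": "9.0.1",
    "15.0": "8.0.3",
    "14.0": "6.0.x",
}

def newtonsoft_version_for(vs_min: str) -> str:
    """Return the Newtonsoft.Json version an extension should reference,
    or raise if the VS version (12.0 and older) didn't ship the package."""
    try:
        return NEWTONSOFT_BY_VS_MIN[vs_min]
    except KeyError:
        raise ValueError(f"Visual Studio {vs_min} does not ship Newtonsoft.Json")
```

For example, `newtonsoft_version_for("14.0")` returns `"6.0.x"`, matching the rule stated above.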

Learn more about Visual Studio versioning in the blog post Visual Studio extensions and version ranges demystified.

Binding redirects

When referencing a lower version of Newtonsoft.Json than ships in Visual Studio, a binding redirect is in place to automatically change the reference to the later version at runtime. Here’s what that looks like in the devenv.exe.config file of Visual Studio 15.0:

<dependentAssembly>
  <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral"/>
  <bindingRedirect oldVersion="4.5.0.0-8.0.0.0" newVersion="8.0.0.0"/>
</dependentAssembly>

It makes sure that when an assembly references a version of Newtonsoft.Json that is older than 8.0.0.0, it automatically redirects to use the 8.0.0.0 version that ships in Visual Studio 15.0.

This is the same mechanism that makes it possible to use an SDK assembly such as Microsoft.VisualStudio.Language.Intellisense version 12.0 in Visual Studio 16.0. A binding redirect automatically changes the reference to the 16.0 version of that assembly.
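The version-range matching a binding redirect performs can be modeled in a few lines. This is a deliberately simplified Python sketch for illustration, not the CLR’s actual assembly resolution logic:

```python
# Toy model of binding-redirect resolution: if the requested assembly
# version falls inside the oldVersion range, the newVersion is loaded
# instead; otherwise the requested version is used as-is.
def parse(version: str) -> tuple:
    """Turn a four-part version string like '6.0.0.0' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def redirect(requested: str, old_range: str, new_version: str) -> str:
    lo, hi = (parse(v) for v in old_range.split("-"))
    return new_version if lo <= parse(requested) <= hi else requested
```

With the devenv.exe.config entry above, `redirect("6.0.0.0", "4.5.0.0-8.0.0.0", "8.0.0.0")` yields `"8.0.0.0"`, while a request for `"9.0.0.0"` falls outside the range and passes through unchanged.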

Don’t ship it unless you need to

The rule of thumb is not to ship the Newtonsoft.Json.dll file in the .vsix container. Since Visual Studio always has a copy and applies binding redirects, there is no reason to ship it.

However, there are two scenarios where you do want to ship the .dll with the extension.

  1. If your extension supports Visual Studio 12.0 or older
  2. If you absolutely need a newer version than the one that ships with Visual Studio

When supporting Visual Studio 12.0 or older, try to use Newtonsoft.Json version 6.0.x if possible. That ensures that when the extension runs in Visual Studio 14.0 and newer, the .NET Framework won’t load the assembly from your extension, but will instead use the one Visual Studio ships with. That means fewer assemblies need to be loaded by the CLR.

If you ship your own version, don’t expect to be able to exchange Newtonsoft.Json types with other assemblies in Visual Studio, because they were compiled against a different version. Normally binding redirects unify the versions, but not when you ship your own. Also specify a code base for it so Visual Studio can resolve it at runtime. You don’t always need to, but it’s considered best practice and avoids any issues. Simply add this line to your AssemblyInfo.cs file:

[assembly: ProvideCodeBase(AssemblyName = "Newtonsoft.Json")]

It’s very important that you never add your own binding redirect for Newtonsoft.Json.dll either. Doing so will force all assemblies in the Visual Studio process to redirect to the version you ship. This might lead to unpredictable issues that could end up breaking other extensions and internal components.

Follow the simple rules

So, the simple rules to apply when using Newtonsoft.Json are:

  1. Reference the lowest version of Newtonsoft.Json (but no lower than 6.0.x)
  2. Don’t ship Newtonsoft.Json.dll in the extension
    1. Except if you target Visual Studio 12.0 or older
    2. Except if you absolutely need a newer version than ships in Visual Studio
    3. If you do, specify a code base for it
  3. Don’t ever add binding redirects for Newtonsoft.Json.dll

I wrote this post based on feedback and questions about how to correctly reference Newtonsoft.Json from an extension. I hope it helped clarify it. If not, please let me know in the comments.

The post Using Newtonsoft.Json in a Visual Studio extension appeared first on The Visual Studio Blog.

Windows 10 SDK Preview Build 18356 available now!


Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 18356 or greater). The Preview SDK Build 18356 contains bug fixes and under-development changes to the API surface area.

The Preview SDK can be downloaded from the developer section on Windows Insider.

For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.

Things to note:

  • This build works in conjunction with previously released SDKs and Visual Studio 2017. You can install this SDK and still continue to submit your apps that target Windows 10 build 1809 or earlier to the Microsoft Store.
  • The Windows SDK will now formally only be supported by Visual Studio 2017 and greater. You can download Visual Studio 2017 here.
  • This build of the Windows SDK will install on Windows 10 Insider Preview builds and supported Windows operating systems.
  • In order to assist with script access to the SDK, the ISO will also be able to be accessed through the following URL: https://go.microsoft.com/fwlink/?prd=11966&pver=1.0&plcid=0x409&clcid=0x409&ar=Flight&sar=Sdsurl&o1=18356 once the static URL is published.
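For script access, the static URL follows a predictable pattern keyed on the build number. A minimal Python sketch of constructing it (the helper name is hypothetical; the URL only resolves once the static URL is published):

```python
# Build the flighted SDK ISO URL for a given Insider build number,
# following the static-URL pattern quoted above.
def sdk_iso_url(build: int) -> str:
    return (
        "https://go.microsoft.com/fwlink/"
        "?prd=11966&pver=1.0&plcid=0x409&clcid=0x409"
        f"&ar=Flight&sar=Sdsurl&o1={build}"
    )

print(sdk_iso_url(18356))
```

The printed URL can then be fed to any download tool in a build script.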

Tools Updates

Message Compiler (mc.exe)

  • The “-mof” switch (to generate XP-compatible ETW helpers) is considered to be deprecated and will be removed in a future version of mc.exe. Removing this switch will cause the generated ETW helpers to expect Vista or later.
  • The “-A” switch (to generate .BIN files using ANSI encoding instead of Unicode) is considered to be deprecated and will be removed in a future version of mc.exe. Removing this switch will cause the generated .BIN files to use Unicode string encoding.
  • The behavior of the “-A” switch has changed. Prior to Windows 1607 Anniversary Update SDK, when using the -A switch, BIN files were encoded using the build system’s ANSI code page. In the Windows 1607 Anniversary Update SDK, mc.exe’s behavior was inadvertently changed to encode BIN files using the build system’s OEM code page. In the 19H1 SDK, mc.exe’s previous behavior has been restored and it now encodes BIN files using the build system’s ANSI code page. Note that the -A switch is deprecated, as ANSI-encoded BIN files do not provide a consistent user experience in multi-lingual systems.

Breaking Changes

IAppxPackageReader2 has been removed from appxpackaging.h

The interface IAppxPackageReader2 was removed from appxpackaging.h. Eliminate the use of IAppxPackageReader2, or use IAppxPackageReader instead.

Change to effect graph of the AcrylicBrush

In this Preview SDK we’ll be adding a blend mode to the effect graph of the AcrylicBrush called Luminosity. This blend mode will ensure that shadows do not appear behind acrylic surfaces without a cutout. We will also be exposing a LuminosityBlendOpacity API available for tweaking that allows for more AcrylicBrush customization.

By default, for those that have not specified any LuminosityBlendOpacity on their AcrylicBrushes, we have implemented some logic to ensure that the Acrylic will look as similar as it can to current 1809 acrylics. Please note that we will be updating our default brushes to account for this recipe change.

TraceLoggingProvider.h  / TraceLoggingWrite

Events generated by TraceLoggingProvider.h (e.g. via TraceLoggingWrite macros) will now always have Id and Version set to 0.

Previously, TraceLoggingProvider.h would assign IDs to events at link time. These IDs were unique within a DLL or EXE, but changed from build to build and from module to module.

API Updates, Additions and Removals

Additions:


namespace Windows.AI.MachineLearning {
  public sealed class LearningModelSession : IClosable {
    public LearningModelSession(LearningModel model, LearningModelDevice deviceToRunOn, LearningModelSessionOptions learningModelSessionOptions);
  }
  public sealed class LearningModelSessionOptions
  public sealed class TensorBoolean : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorBoolean CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorBoolean CreateFromShapeArrayAndDataArray(long[] shape, bool[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorDouble : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorDouble CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorDouble CreateFromShapeArrayAndDataArray(long[] shape, double[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorFloat : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorFloat CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorFloat CreateFromShapeArrayAndDataArray(long[] shape, float[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorFloat16Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorFloat16Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorFloat16Bit CreateFromShapeArrayAndDataArray(long[] shape, float[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorInt16Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorInt16Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorInt16Bit CreateFromShapeArrayAndDataArray(long[] shape, short[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorInt32Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorInt32Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorInt32Bit CreateFromShapeArrayAndDataArray(long[] shape, int[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorInt64Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorInt64Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorInt64Bit CreateFromShapeArrayAndDataArray(long[] shape, long[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorInt8Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorInt8Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorInt8Bit CreateFromShapeArrayAndDataArray(long[] shape, byte[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorString : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorString CreateFromShapeArrayAndDataArray(long[] shape, string[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorUInt16Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorUInt16Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorUInt16Bit CreateFromShapeArrayAndDataArray(long[] shape, ushort[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorUInt32Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorUInt32Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorUInt32Bit CreateFromShapeArrayAndDataArray(long[] shape, uint[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorUInt64Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorUInt64Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorUInt64Bit CreateFromShapeArrayAndDataArray(long[] shape, ulong[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorUInt8Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorUInt8Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorUInt8Bit CreateFromShapeArrayAndDataArray(long[] shape, byte[] data);
    IMemoryBufferReference CreateReference();
  }
}
namespace Windows.ApplicationModel {
  public sealed class Package {
    StorageFolder EffectiveLocation { get; }
    StorageFolder MutableLocation { get; }
  }
}
namespace Windows.ApplicationModel.AppService {
  public sealed class AppServiceConnection : IClosable {
    public static IAsyncOperation<StatelessAppServiceResponse> SendStatelessMessageAsync(AppServiceConnection connection, RemoteSystemConnectionRequest connectionRequest, ValueSet message);
  }
  public sealed class AppServiceTriggerDetails {
    string CallerRemoteConnectionToken { get; }
  }
  public sealed class StatelessAppServiceResponse
  public enum StatelessAppServiceResponseStatus
}
namespace Windows.ApplicationModel.Background {
  public sealed class ConversationalAgentTrigger : IBackgroundTrigger
}
namespace Windows.ApplicationModel.Calls {
  public sealed class PhoneLine {
    string TransportDeviceId { get; }
    void EnableTextReply(bool value);
  }
  public enum PhoneLineTransport {
    Bluetooth = 2,
  }
  public sealed class PhoneLineTransportDevice
}
namespace Windows.ApplicationModel.Calls.Background {
  public enum PhoneIncomingCallDismissedReason
  public sealed class PhoneIncomingCallDismissedTriggerDetails
  public enum PhoneTriggerType {
   IncomingCallDismissed = 6,
  }
}
namespace Windows.ApplicationModel.Calls.Provider {
  public static class PhoneCallOriginManager {
    public static bool IsSupported { get; }
  }
}
namespace Windows.ApplicationModel.ConversationalAgent {
  public sealed class ConversationalAgentSession : IClosable
  public sealed class ConversationalAgentSessionInterruptedEventArgs
  public enum ConversationalAgentSessionUpdateResponse
  public sealed class ConversationalAgentSignal
  public sealed class ConversationalAgentSignalDetectedEventArgs
  public enum ConversationalAgentState
  public sealed class ConversationalAgentSystemStateChangedEventArgs
  public enum ConversationalAgentSystemStateChangeType
}
namespace Windows.ApplicationModel.Preview.Holographic {
  public sealed class HolographicKeyboardPlacementOverridePreview
}
namespace Windows.ApplicationModel.Resources {
  public sealed class ResourceLoader {
    public static ResourceLoader GetForUIContext(UIContext context);
  }
}
namespace Windows.ApplicationModel.Resources.Core {
  public sealed class ResourceCandidate {
    ResourceCandidateKind Kind { get; }
  }
  public enum ResourceCandidateKind
  public sealed class ResourceContext {
    public static ResourceContext GetForUIContext(UIContext context);
  }
}
namespace Windows.ApplicationModel.UserActivities {
  public sealed class UserActivityChannel {
    public static UserActivityChannel GetForUser(User user);
  }
}
namespace Windows.Devices.Bluetooth.GenericAttributeProfile {
  public enum GattServiceProviderAdvertisementStatus {
    StartedWithoutAllAdvertisementData = 4,
  }
  public sealed class GattServiceProviderAdvertisingParameters {
    IBuffer ServiceData { get; set; }
  }
}
namespace Windows.Devices.Enumeration {
  public enum DevicePairingKinds : uint {
    ProvidePasswordCredential = (uint)16,
  }
  public sealed class DevicePairingRequestedEventArgs {
    void AcceptWithPasswordCredential(PasswordCredential passwordCredential);
  }
}
namespace Windows.Devices.Input {
  public sealed class PenDevice
}
namespace Windows.Devices.PointOfService {
  public sealed class JournalPrinterCapabilities : ICommonPosPrintStationCapabilities {
    bool IsReversePaperFeedByLineSupported { get; }
    bool IsReversePaperFeedByMapModeUnitSupported { get; }
    bool IsReverseVideoSupported { get; }
    bool IsStrikethroughSupported { get; }
    bool IsSubscriptSupported { get; }
    bool IsSuperscriptSupported { get; }
  }
  public sealed class JournalPrintJob : IPosPrinterJob {
    void FeedPaperByLine(int lineCount);
    void FeedPaperByMapModeUnit(int distance);
    void Print(string data, PosPrinterPrintOptions printOptions);
  }
  public sealed class PosPrinter : IClosable {
    IVectorView<uint> SupportedBarcodeSymbologies { get; }
    PosPrinterFontProperty GetFontProperty(string typeface);
  }
  public sealed class PosPrinterFontProperty
  public sealed class PosPrinterPrintOptions
  public sealed class ReceiptPrinterCapabilities : ICommonPosPrintStationCapabilities, ICommonReceiptSlipCapabilities {
    bool IsReversePaperFeedByLineSupported { get; }
    bool IsReversePaperFeedByMapModeUnitSupported { get; }
    bool IsReverseVideoSupported { get; }
    bool IsStrikethroughSupported { get; }
    bool IsSubscriptSupported { get; }
    bool IsSuperscriptSupported { get; }
  }
  public sealed class ReceiptPrintJob : IPosPrinterJob, IReceiptOrSlipJob {
    void FeedPaperByLine(int lineCount);
    void FeedPaperByMapModeUnit(int distance);
    void Print(string data, PosPrinterPrintOptions printOptions);
    void StampPaper();
  }
  public struct SizeUInt32
  public sealed class SlipPrinterCapabilities : ICommonPosPrintStationCapabilities, ICommonReceiptSlipCapabilities {
    bool IsReversePaperFeedByLineSupported { get; }
    bool IsReversePaperFeedByMapModeUnitSupported { get; }
    bool IsReverseVideoSupported { get; }
    bool IsStrikethroughSupported { get; }
    bool IsSubscriptSupported { get; }
    bool IsSuperscriptSupported { get; }
  }
  public sealed class SlipPrintJob : IPosPrinterJob, IReceiptOrSlipJob {
    void FeedPaperByLine(int lineCount);
    void FeedPaperByMapModeUnit(int distance);
    void Print(string data, PosPrinterPrintOptions printOptions);
  }
}
namespace Windows.Globalization {
  public sealed class CurrencyAmount
}
namespace Windows.Graphics.DirectX {
  public enum DirectXPrimitiveTopology
}
namespace Windows.Graphics.Holographic {
  public sealed class HolographicCamera {
    HolographicViewConfiguration ViewConfiguration { get; }
  }
  public sealed class HolographicDisplay {
    HolographicViewConfiguration TryGetViewConfiguration(HolographicViewConfigurationKind kind);
  }
  public sealed class HolographicViewConfiguration
  public enum HolographicViewConfigurationKind
}
namespace Windows.Management.Deployment {
  public enum AddPackageByAppInstallerOptions : uint {
    LimitToExistingPackages = (uint)512,
  }
  public enum DeploymentOptions : uint {
    RetainFilesOnFailure = (uint)2097152,
  }
}
namespace Windows.Media.Devices {
  public sealed class InfraredTorchControl
  public enum InfraredTorchMode
  public sealed class VideoDeviceController : IMediaDeviceController {
    InfraredTorchControl InfraredTorchControl { get; }
  }
}
namespace Windows.Media.Miracast {
  public sealed class MiracastReceiver
  public sealed class MiracastReceiverApplySettingsResult
  public enum MiracastReceiverApplySettingsStatus
  public enum MiracastReceiverAuthorizationMethod
  public sealed class MiracastReceiverConnection : IClosable
  public sealed class MiracastReceiverConnectionCreatedEventArgs
  public sealed class MiracastReceiverCursorImageChannel
  public sealed class MiracastReceiverCursorImageChannelSettings
  public sealed class MiracastReceiverDisconnectedEventArgs
  public enum MiracastReceiverDisconnectReason
  public sealed class MiracastReceiverGameControllerDevice
  public enum MiracastReceiverGameControllerDeviceUsageMode
  public sealed class MiracastReceiverInputDevices
  public sealed class MiracastReceiverKeyboardDevice
  public enum MiracastReceiverListeningStatus
  public sealed class MiracastReceiverMediaSourceCreatedEventArgs
  public sealed class MiracastReceiverSession : IClosable
  public sealed class MiracastReceiverSessionStartResult
  public enum MiracastReceiverSessionStartStatus
  public sealed class MiracastReceiverSettings
  public sealed class MiracastReceiverStatus
  public sealed class MiracastReceiverStreamControl
  public sealed class MiracastReceiverVideoStreamSettings
  public enum MiracastReceiverWiFiStatus
  public sealed class MiracastTransmitter
  public enum MiracastTransmitterAuthorizationStatus
}
namespace Windows.Networking.Connectivity {
  public enum NetworkAuthenticationType {
    Wpa3 = 10,
    Wpa3Sae = 11,
  }
}
namespace Windows.Networking.NetworkOperators {
  public sealed class ESim {
    ESimDiscoverResult Discover();
    ESimDiscoverResult Discover(string serverAddress, string matchingId);
    IAsyncOperation<ESimDiscoverResult> DiscoverAsync();
    IAsyncOperation<ESimDiscoverResult> DiscoverAsync(string serverAddress, string matchingId);
  }
  public sealed class ESimDiscoverEvent
  public sealed class ESimDiscoverResult
  public enum ESimDiscoverResultKind
}
namespace Windows.Perception.People {
  public sealed class EyesPose
  public enum HandJointKind
  public sealed class HandMeshObserver
  public struct HandMeshVertex
  public sealed class HandMeshVertexState
  public sealed class HandPose
  public struct JointPose
  public enum JointPoseAccuracy
}
namespace Windows.Perception.Spatial {
  public struct SpatialRay
}
namespace Windows.Perception.Spatial.Preview {
  public sealed class SpatialGraphInteropFrameOfReferencePreview
  public static class SpatialGraphInteropPreview {
    public static SpatialGraphInteropFrameOfReferencePreview TryCreateFrameOfReference(SpatialCoordinateSystem coordinateSystem);
    public static SpatialGraphInteropFrameOfReferencePreview TryCreateFrameOfReference(SpatialCoordinateSystem coordinateSystem, Vector3 relativePosition);
    public static SpatialGraphInteropFrameOfReferencePreview TryCreateFrameOfReference(SpatialCoordinateSystem coordinateSystem, Vector3 relativePosition, Quaternion relativeOrientation);
  }
}
namespace Windows.Security.Authorization.AppCapabilityAccess {
  public sealed class AppCapability
  public sealed class AppCapabilityAccessChangedEventArgs
  public enum AppCapabilityAccessStatus
}
namespace Windows.Security.DataProtection {
  public enum UserDataAvailability
  public sealed class UserDataAvailabilityStateChangedEventArgs
  public sealed class UserDataBufferUnprotectResult
  public enum UserDataBufferUnprotectStatus
  public sealed class UserDataProtectionManager
  public sealed class UserDataStorageItemProtectionInfo
  public enum UserDataStorageItemProtectionStatus
}
namespace Windows.Storage.AccessCache {
  public static class StorageApplicationPermissions {
    public static StorageItemAccessList GetFutureAccessListForUser(User user);
    public static StorageItemMostRecentlyUsedList GetMostRecentlyUsedListForUser(User user);
  }
}
namespace Windows.Storage.Pickers {
  public sealed class FileOpenPicker {
    User User { get; }
    public static FileOpenPicker CreateForUser(User user);
  }
  public sealed class FileSavePicker {
    User User { get; }
    public static FileSavePicker CreateForUser(User user);
  }
  public sealed class FolderPicker {
    User User { get; }
    public static FolderPicker CreateForUser(User user);
  }
}
namespace Windows.System {
  public sealed class DispatcherQueue {
    bool HasThreadAccess { get; }
  }
  public enum ProcessorArchitecture {
    Arm64 = 12,
    X86OnArm64 = 14,
  }
}
namespace Windows.System.Profile {
  public static class AppApplicability
  public sealed class UnsupportedAppRequirement
  public enum UnsupportedAppRequirementReasons : uint
}
namespace Windows.System.RemoteSystems {
  public sealed class RemoteSystem {
    User User { get; }
    public static RemoteSystemWatcher CreateWatcherForUser(User user);
    public static RemoteSystemWatcher CreateWatcherForUser(User user, IIterable<IRemoteSystemFilter> filters);
  }
  public sealed class RemoteSystemApp {
    string ConnectionToken { get; }
    User User { get; }
  }
  public sealed class RemoteSystemConnectionRequest {
    string ConnectionToken { get; }
    public static RemoteSystemConnectionRequest CreateFromConnectionToken(string connectionToken);
    public static RemoteSystemConnectionRequest CreateFromConnectionTokenForUser(User user, string connectionToken);
  }
  public sealed class RemoteSystemWatcher {
    User User { get; }
  }
}
namespace Windows.UI {
  public sealed class UIContentRoot
  public sealed class UIContext
}
namespace Windows.UI.Composition {
  public enum CompositionBitmapInterpolationMode {
    MagLinearMinLinearMipLinear = 2,
    MagLinearMinLinearMipNearest = 3,
    MagLinearMinNearestMipLinear = 4,
    MagLinearMinNearestMipNearest = 5,
    MagNearestMinLinearMipLinear = 6,
    MagNearestMinLinearMipNearest = 7,
    MagNearestMinNearestMipLinear = 8,
    MagNearestMinNearestMipNearest = 9,
  }
  public sealed class CompositionGraphicsDevice : CompositionObject {
    CompositionMipmapSurface CreateMipmapSurface(SizeInt32 sizePixels, DirectXPixelFormat pixelFormat, DirectXAlphaMode alphaMode);
    void Trim();
  }
  public sealed class CompositionMipmapSurface : CompositionObject, ICompositionSurface
  public sealed class CompositionProjectedShadow : CompositionObject
  public sealed class CompositionProjectedShadowCaster : CompositionObject
  public sealed class CompositionProjectedShadowCasterCollection : CompositionObject, IIterable<CompositionProjectedShadowCaster>
  public sealed class CompositionProjectedShadowReceiver : CompositionObject
  public sealed class CompositionProjectedShadowReceiverUnorderedCollection : CompositionObject, IIterable<CompositionProjectedShadowReceiver>
  public sealed class CompositionRadialGradientBrush : CompositionGradientBrush
  public sealed class CompositionSurfaceBrush : CompositionBrush {
    bool SnapToPixels { get; set; }
  }
  public class CompositionTransform : CompositionObject
  public sealed class CompositionVisualSurface : CompositionObject, ICompositionSurface
  public sealed class Compositor : IClosable {
    CompositionProjectedShadow CreateProjectedShadow();
    CompositionProjectedShadowCaster CreateProjectedShadowCaster();
    CompositionProjectedShadowReceiver CreateProjectedShadowReceiver();
    CompositionRadialGradientBrush CreateRadialGradientBrush();
    CompositionVisualSurface CreateVisualSurface();
  }
  public interface IVisualElement
}
namespace Windows.UI.Composition.Interactions {
  public enum InteractionBindingAxisModes : uint
  public sealed class InteractionTracker : CompositionObject {
    public static InteractionBindingAxisModes GetBindingMode(InteractionTracker boundTracker1, InteractionTracker boundTracker2);
    public static void SetBindingMode(InteractionTracker boundTracker1, InteractionTracker boundTracker2, InteractionBindingAxisModes axisMode);
  }
  public sealed class InteractionTrackerCustomAnimationStateEnteredArgs {
    bool IsFromBinding { get; }
  }
  public sealed class InteractionTrackerIdleStateEnteredArgs {
    bool IsFromBinding { get; }
  }
  public sealed class InteractionTrackerInertiaStateEnteredArgs {
    bool IsFromBinding { get; }
  }
  public sealed class InteractionTrackerInteractingStateEnteredArgs {
    bool IsFromBinding { get; }
  }
  public class VisualInteractionSource : CompositionObject, ICompositionInteractionSource {
    public static VisualInteractionSource CreateFromIVisualElement(IVisualElement source);
  }
}
namespace Windows.UI.Composition.Scenes {
  public enum SceneAlphaMode
  public enum SceneAttributeSemantic
  public sealed class SceneBoundingBox : SceneObject
  public class SceneComponent : SceneObject
  public sealed class SceneComponentCollection : SceneObject, IIterable<SceneComponent>, IVector<SceneComponent>
  public enum SceneComponentType
  public class SceneMaterial : SceneObject
  public class SceneMaterialInput : SceneObject
  public sealed class SceneMesh : SceneObject
  public sealed class SceneMeshMaterialAttributeMap : SceneObject, IIterable<IKeyValuePair<string, SceneAttributeSemantic>>, IMap<string, SceneAttributeSemantic>
  public sealed class SceneMeshRendererComponent : SceneRendererComponent
  public sealed class SceneMetallicRoughnessMaterial : ScenePbrMaterial
  public sealed class SceneModelTransform : CompositionTransform
  public sealed class SceneNode : SceneObject
  public sealed class SceneNodeCollection : SceneObject, IIterable<SceneNode>, IVector<SceneNode>
  public class SceneObject : CompositionObject
  public class ScenePbrMaterial : SceneMaterial
  public class SceneRendererComponent : SceneComponent
  public sealed class SceneSurfaceMaterialInput : SceneMaterialInput
  public sealed class SceneVisual : ContainerVisual
  public enum SceneWrappingMode
}
namespace Windows.UI.Core {
  public sealed class CoreWindow : ICorePointerRedirector, ICoreWindow {
    UIContext UIContext { get; }
  }
}
namespace Windows.UI.Core.Preview {
  public sealed class CoreAppWindowPreview
}
namespace Windows.UI.Input {
  public class AttachableInputObject : IClosable
  public enum GazeInputAccessStatus
  public sealed class InputActivationListener : AttachableInputObject
  public sealed class InputActivationListenerActivationChangedEventArgs
  public enum InputActivationState
}
namespace Windows.UI.Input.Preview {
  public static class InputActivationListenerPreview
}
namespace Windows.UI.Input.Spatial {
  public sealed class SpatialInteractionManager {
    public static bool IsSourceKindSupported(SpatialInteractionSourceKind kind);
  }
  public sealed class SpatialInteractionSource {
    HandMeshObserver TryCreateHandMeshObserver();
    IAsyncOperation<HandMeshObserver> TryCreateHandMeshObserverAsync();
  }
  public sealed class SpatialInteractionSourceState {
    HandPose TryGetHandPose();
  }
  public sealed class SpatialPointerPose {
    EyesPose Eyes { get; }
    bool IsHeadCapturedBySystem { get; }
  }
}
namespace Windows.UI.Notifications {
  public sealed class ToastActivatedEventArgs {
    ValueSet UserInput { get; }
  }
  public sealed class ToastNotification {
    bool ExpiresOnReboot { get; set; }
  }
}
namespace Windows.UI.ViewManagement {
  public sealed class ApplicationView {
    string PersistedStateId { get; set; }
    UIContext UIContext { get; }
    WindowingEnvironment WindowingEnvironment { get; }
    public static void ClearAllPersistedState();
    public static void ClearPersistedState(string key);
    IVectorView<DisplayRegion> GetDisplayRegions();
  }
  public sealed class InputPane {
    public static InputPane GetForUIContext(UIContext context);
  }
  public sealed class UISettings {
    bool AutoHideScrollBars { get; }
    event TypedEventHandler<UISettings, UISettingsAutoHideScrollBarsChangedEventArgs> AutoHideScrollBarsChanged;
  }
  public sealed class UISettingsAutoHideScrollBarsChangedEventArgs
}
namespace Windows.UI.ViewManagement.Core {
  public sealed class CoreInputView {
    public static CoreInputView GetForUIContext(UIContext context);
  }
}
namespace Windows.UI.WindowManagement {
  public sealed class AppWindow
  public sealed class AppWindowChangedEventArgs
  public sealed class AppWindowClosedEventArgs
  public enum AppWindowClosedReason
  public sealed class AppWindowCloseRequestedEventArgs
  public sealed class AppWindowFrame
  public enum AppWindowFrameStyle
  public sealed class AppWindowPlacement
  public class AppWindowPresentationConfiguration
  public enum AppWindowPresentationKind
  public sealed class AppWindowPresenter
  public sealed class AppWindowTitleBar
  public sealed class AppWindowTitleBarOcclusion
  public enum AppWindowTitleBarVisibility
  public sealed class CompactOverlayPresentationConfiguration : AppWindowPresentationConfiguration
  public sealed class DefaultPresentationConfiguration : AppWindowPresentationConfiguration
  public sealed class DisplayRegion
  public sealed class FullScreenPresentationConfiguration : AppWindowPresentationConfiguration
  public sealed class WindowingEnvironment
  public sealed class WindowingEnvironmentAddedEventArgs
  public sealed class WindowingEnvironmentChangedEventArgs
  public enum WindowingEnvironmentKind
  public sealed class WindowingEnvironmentRemovedEventArgs
}
namespace Windows.UI.WindowManagement.Preview {
  public sealed class WindowManagementPreview
}
namespace Windows.UI.Xaml {
  public class UIElement : DependencyObject, IAnimationObject, IVisualElement {
    Vector3 ActualOffset { get; }
    Vector2 ActualSize { get; }
    Shadow Shadow { get; set; }
    public static DependencyProperty ShadowProperty { get; }
    UIContext UIContext { get; }
    XamlRoot XamlRoot { get; set; }
  }
  public class UIElementWeakCollection : IIterable<UIElement>, IVector<UIElement>
  public sealed class Window {
    UIContext UIContext { get; }
  }
  public sealed class XamlRoot
  public sealed class XamlRootChangedEventArgs
}
namespace Windows.UI.Xaml.Controls {
  public sealed class DatePickerFlyoutPresenter : Control {
    bool IsDefaultShadowEnabled { get; set; }
    public static DependencyProperty IsDefaultShadowEnabledProperty { get; }
  }
  public class FlyoutPresenter : ContentControl {
    bool IsDefaultShadowEnabled { get; set; }
    public static DependencyProperty IsDefaultShadowEnabledProperty { get; }
  }
  public class InkToolbar : Control {
    InkPresenter TargetInkPresenter { get; set; }
    public static DependencyProperty TargetInkPresenterProperty { get; }
  }
  public class MenuFlyoutPresenter : ItemsControl {
    bool IsDefaultShadowEnabled { get; set; }
    public static DependencyProperty IsDefaultShadowEnabledProperty { get; }
  }
  public sealed class TimePickerFlyoutPresenter : Control {
    bool IsDefaultShadowEnabled { get; set; }
    public static DependencyProperty IsDefaultShadowEnabledProperty { get; }
  }
  public class TwoPaneView : Control
  public enum TwoPaneViewMode
  public enum TwoPaneViewPriority
  public enum TwoPaneViewTallModeConfiguration
  public enum TwoPaneViewWideModeConfiguration
}
namespace Windows.UI.Xaml.Controls.Maps {
  public sealed class MapControl : Control {
    bool CanTiltDown { get; }
    public static DependencyProperty CanTiltDownProperty { get; }
    bool CanTiltUp { get; }
    public static DependencyProperty CanTiltUpProperty { get; }
    bool CanZoomIn { get; }
    public static DependencyProperty CanZoomInProperty { get; }
    bool CanZoomOut { get; }
    public static DependencyProperty CanZoomOutProperty { get; }
  }
  public enum MapLoadingStatus {
    DownloadedMapsManagerUnavailable = 3,
  }
}
namespace Windows.UI.Xaml.Controls.Primitives {
  public sealed class AppBarTemplateSettings : DependencyObject {
    double NegativeCompactVerticalDelta { get; }
    double NegativeHiddenVerticalDelta { get; }
    double NegativeMinimalVerticalDelta { get; }
  }
  public sealed class CommandBarTemplateSettings : DependencyObject {
    double OverflowContentCompactYTranslation { get; }
    double OverflowContentHiddenYTranslation { get; }
    double OverflowContentMinimalYTranslation { get; }
  }
  public class FlyoutBase : DependencyObject {
    bool IsConstrainedToRootBounds { get; }
    bool ShouldConstrainToRootBounds { get; set; }
    public static DependencyProperty ShouldConstrainToRootBoundsProperty { get; }
    XamlRoot XamlRoot { get; set; }
  }
  public sealed class Popup : FrameworkElement {
    bool IsConstrainedToRootBounds { get; }
    bool ShouldConstrainToRootBounds { get; set; }
    public static DependencyProperty ShouldConstrainToRootBoundsProperty { get; }
  }
}
namespace Windows.UI.Xaml.Core.Direct {
  public enum XamlPropertyIndex {
    AppBarTemplateSettings_NegativeCompactVerticalDelta = 2367,
    AppBarTemplateSettings_NegativeHiddenVerticalDelta = 2368,
    AppBarTemplateSettings_NegativeMinimalVerticalDelta = 2369,
    CommandBarTemplateSettings_OverflowContentCompactYTranslation = 2384,
    CommandBarTemplateSettings_OverflowContentHiddenYTranslation = 2385,
    CommandBarTemplateSettings_OverflowContentMinimalYTranslation = 2386,
    FlyoutBase_ShouldConstrainToRootBounds = 2378,
    FlyoutPresenter_IsDefaultShadowEnabled = 2380,
    MenuFlyoutPresenter_IsDefaultShadowEnabled = 2381,
    Popup_ShouldConstrainToRootBounds = 2379,
    ThemeShadow_Receivers = 2279,
    UIElement_ActualOffset = 2382,
    UIElement_ActualSize = 2383,
    UIElement_Shadow = 2130,
  }
  public enum XamlTypeIndex {
    ThemeShadow = 964,
  }
}
namespace Windows.UI.Xaml.Documents {
  public class TextElement : DependencyObject {
    XamlRoot XamlRoot { get; set; }
  }
}
namespace Windows.UI.Xaml.Hosting {
  public sealed class ElementCompositionPreview {
    public static UIElement GetAppWindowContent(AppWindow appWindow);
    public static void SetAppWindowContent(AppWindow appWindow, UIElement xamlContent);
  }
}
namespace Windows.UI.Xaml.Input {
  public sealed class FocusManager {
    public static object GetFocusedElement(XamlRoot xamlRoot);
  }
  public class StandardUICommand : XamlUICommand {
    StandardUICommandKind Kind { get; set; }
  }
}
namespace Windows.UI.Xaml.Media {
  public class AcrylicBrush : XamlCompositionBrushBase {
    IReference<double> TintLuminosityOpacity { get; set; }
    public static DependencyProperty TintLuminosityOpacityProperty { get; }
  }
  public class Shadow : DependencyObject
  public class ThemeShadow : Shadow
  public sealed class VisualTreeHelper {
    public static IVectorView<Popup> GetOpenPopupsForXamlRoot(XamlRoot xamlRoot);
  }
}
namespace Windows.UI.Xaml.Media.Animation {
  public class GravityConnectedAnimationConfiguration : ConnectedAnimationConfiguration {
    bool IsShadowEnabled { get; set; }
  }
}
namespace Windows.Web.Http {
  public sealed class HttpClient : IClosable, IStringable {
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryDeleteAsync(Uri uri);
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryGetAsync(Uri uri);
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryGetAsync(Uri uri, HttpCompletionOption completionOption);
    IAsyncOperationWithProgress<HttpGetBufferResult, HttpProgress> TryGetBufferAsync(Uri uri);
    IAsyncOperationWithProgress<HttpGetInputStreamResult, HttpProgress> TryGetInputStreamAsync(Uri uri);
    IAsyncOperationWithProgress<HttpGetStringResult, HttpProgress> TryGetStringAsync(Uri uri);
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryPostAsync(Uri uri, IHttpContent content);
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryPutAsync(Uri uri, IHttpContent content);
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TrySendRequestAsync(HttpRequestMessage request);
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TrySendRequestAsync(HttpRequestMessage request, HttpCompletionOption completionOption);
  }
  public sealed class HttpGetBufferResult : IClosable, IStringable
  public sealed class HttpGetInputStreamResult : IClosable, IStringable
  public sealed class HttpGetStringResult : IClosable, IStringable
  public sealed class HttpRequestResult : IClosable, IStringable
}
namespace Windows.Web.Http.Filters {
  public sealed class HttpBaseProtocolFilter : IClosable, IHttpFilter {
    User User { get; }
    public static HttpBaseProtocolFilter CreateForUser(User user);
  }
}

The post Windows 10 SDK Preview Build 18356 available now! appeared first on Windows Developer Blog.

Linker Throughput Improvement in Visual Studio 2019


In Visual Studio 2019 Preview 2 we made the compiler back-end prune away debug information that is unrelated to code or data emitted into the binary, and we changed certain hash implementations in the PDB engine. Together these changes improve linker throughput, resulting in a more than 2x reduction in link time for a large AAA game title.

Debug Info Pruning

The compiler back-end now prunes away debug info for any user-defined types (UDTs) that are not referenced by any symbol record. This cuts down the size of the OBJ sections holding debug info: .debug$S, which holds debug records for symbols, and .debug$T, which holds debug records for types when /Z7 is used. When /Zi or /ZI is used, the compiler instead writes type debug info into a single PDB file, usually shared by the compilations of all source files under one directory. In that case we don't prune types from the compiler-generated PDB, but we still remove S_UDT records from the .debug$S sections when the underlying UDTs are not referenced by any symbol.

With smaller debug sections in OBJs and LIBs, the linker has less work to do during type merging and symbol processing when generating the PDB, which speeds up linking because PDB generation usually takes the majority of link time. The linker also makes aggressive use of memory-mapped file I/O, so smaller OBJs and LIBs reduce pressure on virtual memory, which is crucial for link speed when working on big binaries like those in game development.

Type pruning done by the compiler is not free: it degrades compilation throughput, especially when the compiler generates a PDB under /Zi or /ZI and the PDB server (mspdbsrv.exe) is in use, for example with /MP or in a smart build system where the build driver kicks off multiple compilations targeting the same PDB file at once. Since linking is usually the biggest bottleneck in build throughput, we have enabled type pruning by default whenever mspdbsrv.exe is not used in compilation. We think this is a good tradeoff: compilations can easily be done in parallel, and in the edit-build-debug iteration cycle, where usually only a small portion of source files needs to be recompiled, link time dominates the overall build time. To force-enable pruning in cases where mspdbsrv.exe will be involved, add the compiler option /d2prunedbinfo.
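As a sketch, here is how the two modes described above might look on the command line. The source file names are placeholders; /Z7, /Zi, /MP, and /d2prunedbinfo are the real MSVC options discussed in this post:

```shell
rem /Z7: debug info goes into the .debug$S/.debug$T sections of each OBJ;
rem no mspdbsrv.exe is involved, so unreferenced types are pruned by default.
cl /c /Z7 game.cpp

rem /Zi with /MP spawns mspdbsrv.exe, which disables pruning by default;
rem /d2prunedbinfo force-enables it at some extra compile-time cost.
cl /c /Zi /MP /d2prunedbinfo a.cpp b.cpp c.cpp
```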

Type and Global Symbol Hash Improvement in PDB

The PDB file stores various hashes over types, both for the convenience of adding new type records to an existing PDB file and for type querying at debug or profile time. The PDB file format has been around for more than 25 years, and many tools built by Microsoft and other companies deal with PDBs. While the type hashes in today's PDB are inefficient at handling a large number of types, we didn't want to simply switch to an efficient hash with different structures, in order to maintain compatibility with the PDB format. Instead, in Preview 2 we use xxHash to check whether a given type is unique. When type merging is done and it is time to commit everything to the PDB file on disk, we rebuild the hashes used in today's PDB file and write those out. xxHash is extremely fast; although it doesn't meet the security requirements of cryptographic applications, it has good quality measures, and we use it here only for uniqueness checking.
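The uniqueness check can be sketched as follows. This is only an illustration, not the PDB engine's actual code: `zlib.crc32` stands in for xxHash, and the byte-string "type records" are made up. The fast hash is used as a first-pass filter, with a full byte comparison only on a hash collision:

```python
import zlib

def merge_types(obj_type_records):
    """Deduplicate raw type records using a fast non-cryptographic hash.

    The hash (crc32 here, standing in for xxHash) is only a uniqueness
    filter; records that collide are compared byte-for-byte.
    """
    by_hash = {}   # hash value -> list of distinct records with that hash
    merged = []    # merged type stream, in first-seen order
    for record in obj_type_records:
        bucket = by_hash.setdefault(zlib.crc32(record), [])
        if not any(existing == record for existing in bucket):
            bucket.append(record)
            merged.append(record)
    return merged
```

On a real linker workload, hashing and comparing millions of records dominates type-merging time, which is why swapping in a faster hash function pays off.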

Similar to the type-merging improvement, the linker now communicates the number of public symbols to the PDB engine, so the engine can set up a hash table with a sufficient number of buckets, which results in far fewer hash collisions. As with type merging, the in-memory version of the hash table is converted to the on-disk format before being committed to the PDB.
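Why pre-sizing helps can be shown with a small experiment (illustrative only; the mangled symbol names and bucket counts are invented). Packing many symbols into too few buckets forces long chains, while sizing the table to the known symbol count keeps worst-case lookups short:

```python
import zlib

def longest_chain(symbols, num_buckets):
    """Hash each symbol name into one of num_buckets chains and return
    the longest chain length, a proxy for worst-case lookup cost."""
    buckets = [0] * num_buckets
    for name in symbols:
        buckets[zlib.crc32(name.encode()) % num_buckets] += 1
    return max(buckets)

# 100,000 fake decorated names, as a stand-in for a binary's public symbols.
symbols = [f"?func{i}@@YAXXZ" for i in range(100_000)]

undersized = longest_chain(symbols, 1_024)    # table sized without the hint
presized = longest_chain(symbols, 100_000)    # table sized to the symbol count
```

With the symbol count known up front, the PDB engine can allocate the larger table directly instead of growing and rehashing as symbols stream in.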

In Preview 2, the improvements to the internal PDB hashes are only effective when generating a PDB from scratch. Reading records out of an existing PDB and rebuilding the fast in-memory hashes is expensive, and that overhead offsets the gain from processing types and symbols with the faster hashes.

Results

Here is a comparison between the latest Visual Studio 2017 15.9 Update release and Visual Studio 2019 Preview 2, building one AAA game title and Google's Chrome. In the tables below, the first two data rows show link time in seconds and the last row shows the total size, in bytes, of the input to the linker:

AAA Game Title

                      VS 2017 15.9 Update (base)   VS 2019 Preview 2 (diff)   base/diff (higher is better)
/DEBUG:full (s)       392.1                        163.3                      2.40
/DEBUG:fastlink (s)   72.3                         31.2                       2.32
Input size (bytes)    12,882,624,412               8,131,565,290              1.58

 

Google Chrome (x64 release build)

                      VS 2017 15.9 Update (base)   VS 2019 Preview 2 (diff)   base/diff (higher is better)
/DEBUG:full (s)       126.8                        71.9                       1.76
/DEBUG:fastlink (s)   30.3                         21.5                       1.41
Input size (bytes)    5,858,077,238                5,442,644,550              1.08

 

Google Chrome (x86 debug build)

                      VS 2017 15.9 Update (base)   VS 2019 Preview 2 (diff)   base/diff (higher is better)
/DEBUG:full (s)       232.6                        106.9                      2.18
/DEBUG:fastlink (s)   43.8                         38.8                       1.13
Input size (bytes)    8,384,258,922                7,962,819,862              1.05

 

We don’t see as large a reduction in linker input size when building Chrome as when building the AAA game title, because Chrome is compiled with /Zi, under which the compiler writes types into a PDB file, while the AAA game title is compiled with /Z7, under which type records go into the .debug$T sections of the OBJs and unreferenced ones are pruned away. Full-PDB link times also tend to benefit more from these improvements than fastlink times: fastlink PDB generation involves neither type merging nor creation of global symbols, so the latter two improvements don’t apply to it. Type pruning in the compiler benefits both kinds of linking by reducing the raw amount of debug-record work the linker has to do to produce the PDB.

Closing Remarks

We know build throughput is important for developers, and we are continuing to improve our toolset’s performance. Over the next few releases we will work on reducing the compile-time cost of pruning unreferenced types, as well as continuing to improve the various internal PDB hashes. If you have feedback or suggestions for us, let us know. We can be reached via the comments below, via email (visualcpp@microsoft.com), through Help -> Report a Problem in the Visual Studio IDE, or via Developer Community. You can also find us on Twitter (@VisualC) and Facebook (msftvisualcpp).

 

The post Linker Throughput Improvement in Visual Studio 2019 appeared first on C++ Team Blog.

Windows Virtual Desktop now in public preview on Azure


We recently shared the public preview of the Windows Virtual Desktop service on Azure. Now customers can access the only service that delivers simplified management, multi-session Windows 10, optimizations for Office 365 ProPlus, and support for Windows Server Remote Desktop Services (RDS) desktops and apps. With Windows Virtual Desktop, you can deploy and scale your Windows desktops and apps on Azure in minutes, while enjoying built-in security and compliance.


This means customers can now move multi-session Windows 10, Windows 7, and Windows Server (RDS) desktops and apps to Windows Virtual Desktop for a simplified management and deployment experience on Azure. We also built Windows Virtual Desktop as an extensible solution for our partners, including Citrix, Samsung, and Microsoft Cloud Solution Providers (CSP).

Access to Windows Virtual Desktop is available through applicable RDS and Windows Enterprise licenses. With the appropriate license, you just need to set up an Azure subscription to get started today. You can choose the type of virtual machines and storage you want to suit your environment. You can optimize costs by taking advantage of Reserved Instances with up to a 72 percent discount and using multi-session Windows 10.

You can read more detail about Windows Virtual Desktop in the Microsoft 365 blog published today by Julia White and Brad Anderson.

Get started with the public preview today.

Helping IT reduce costs, increase security, and boost employee productivity
