
Account failover now in public preview for Azure Storage


Today we are excited to share the public preview of account failover for customers with geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS) accounts. Customers using GRS or RA-GRS accounts can take advantage of this functionality to control when to fail over from the primary region to the secondary region for their storage accounts.

Customers have told us that they want control over storage account failover so they can decide when write access to the account must be restored and can verify the replication state of the secondary region before failing over.

If the primary region for your geo-redundant storage account becomes unavailable for an extended period of time, you can force an account failover. When you perform a failover, all data in the storage account is failed over to the secondary region, and the secondary region becomes the new primary region. The DNS records for all storage service endpoints – blob, Azure Data Lake Storage Gen2, file, queue, and table – are updated to point to the new primary region. Once the failover is complete, clients can automatically begin writing data to the storage account using the service endpoints in the new primary region, without any code changes.

The diagram below shows how account failover works. Under normal circumstances, a client writes data to a geo-redundant storage account (GRS or RA-GRS) in the primary region, and that data is replicated asynchronously to the secondary region. If write operations to the primary region fail consistently, you can trigger a failover.
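To make the "fail consistently" condition concrete, here is a minimal sketch of the kind of threshold check an operator might script before deciding to trigger a failover. The function name and threshold are illustrative, not part of any Azure SDK:

```python
def should_failover(recent_writes, failure_threshold=10):
    """Return True if the last `failure_threshold` write attempts all failed.

    recent_writes is a list of booleans: True = success, False = failure.
    Illustrative only: a real runbook would also consult service health
    and the secondary's replication state before failing over.
    """
    if len(recent_writes) < failure_threshold:
        return False
    return not any(recent_writes[-failure_threshold:])

# Ten consecutive failed writes suggest considering a failover
history = [True, True] + [False] * 10
print(should_failover(history))  # True
```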

Account failover diagram

After the failover is complete, write operations can resume against the new primary service endpoints.

Post failover, the storage account is configured to be locally redundant (LRS). To resume replication to the new secondary region, configure the account to use geo-redundant storage again (either RA-GRS or GRS). Keep in mind that converting a locally-redundant (LRS) account to RA-GRS or GRS incurs a cost.

Account failover is supported in preview for new and existing Azure Resource Manager storage accounts that are configured for RA-GRS or GRS. Storage accounts may be general-purpose v1 (GPv1), general-purpose v2 (GPv2), or Blob Storage accounts. Account failover is currently supported in the West US 2 and West Central US regions.

You can initiate account failover using the Azure portal, Azure PowerShell, the Azure CLI, or the Azure Storage Resource Provider API. The image below shows how to trigger account failover in the Azure portal in one step.

Geo-replication in the Azure portal

As is the case with most previews, account failover should not be used with production workloads. There is no production SLA until the feature becomes generally available.

It's important to note that account failover often results in some data loss, because geo-replication is asynchronous: the secondary endpoint typically lags behind the primary endpoint, so any data that has not yet been replicated to the secondary region when you initiate a failover will be lost.

We recommend that you always check the Last Sync Time property before initiating a failover to evaluate how far the secondary is behind the primary. To understand the implications of account failover and learn more about the feature, please read the documentation, “What to do if an Azure Storage outage occurs.”
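As a rough illustration of that check, the window of at-risk writes is simply the gap between the Last Sync Time reported by the service and the current time. A hypothetical sketch, not an Azure SDK call:

```python
from datetime import datetime, timezone

def data_loss_window(last_sync_time, now=None):
    """Estimate, in seconds, the window of writes that may not yet have
    replicated to the secondary region (and would be lost on failover)."""
    now = now or datetime.now(timezone.utc)
    return (now - last_sync_time).total_seconds()

# Example: the secondary is 5 minutes 30 seconds behind the primary
last_sync = datetime(2019, 2, 1, 12, 0, 0, tzinfo=timezone.utc)
now = datetime(2019, 2, 1, 12, 5, 30, tzinfo=timezone.utc)
print(data_loss_window(last_sync, now))  # 330.0
```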

For questions about participation in the preview or about account failover, contact xstoredr@microsoft.com. We welcome your feedback on the account failover feature and documentation!


Microsoft Azure portal February 2019 update


This month we’re bringing you updates to several compute (IaaS) resources, the ability to export contents of lists of resources and resource groups as CSV files, an improvement to the layout of essential properties on overview pages, enhancements to the experience on recovery services pages, and expansions of setting options in Microsoft Intune.

Sign in to the Azure portal now and see for yourself everything that’s new. You can also download the Azure mobile app.

Here is a list of February updates to the Azure portal:

Compute (IaaS)

Shell

Site Recovery

Other

Let’s look at each of these updates in detail.

Compute (IaaS)

Add a new VM directly to an application gateway or load balancer

We learned from you that a common scenario involves adding a new VM to a load-balanced set, such as setting up a SharePoint farm or putting together a three-tier web application. You can now add a new VM to an existing load-balancing solution during the VM creation process. When you specify networking parameters for your virtual machine, you can choose to add it to the backend pool of an application gateway for HTTP and HTTPS traffic, or of a Standard SKU load balancer for all TCP and UDP traffic.

Create a virtual machine

Migrate classic VMs to Azure Resource Manager

The Azure Resource Manager (ARM) deployment model was released nearly three years ago, and many features have been added since then that are exclusive to ARM. The Azure platform supports migrating classic Azure Service Manager (ASM) resources to ARM, and you can now use the Azure portal to migrate existing infrastructure virtual machines, virtual networks, and storage accounts to the modern ARM deployment model.

Screenshot of Azure Service Manager resources migrating to Azure Resource Manager

Navigate to a classic virtual machine, and select Migrate to ARM from the Resource menu under Settings.

VMSS password reset

You can now use the portal to reset the password of virtual machine scale set instances.

Screenshot of resetting a virtual machine scale set instance password in the Azure portal

Navigate to a virtual machine scale set in the Azure portal, and select Reset password.

Shell

Export as CSV in All resources and Resource groups

We have recently added the ability to export the contents of lists of resources and resource groups to a CSV (comma separated values) file.

This capability is available in the All resources screen:

All resources screenshot

It is also available in the Resource groups screen:

Resource group screenshots

We have also added this capability to the individual Resource group screen, so you can download all the resources within a single resource group to a CSV file:

Downloading CSV file with single resource group containing all resources
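Conceptually, the export just flattens the list's columns into comma-separated values. A small sketch of the equivalent output, with illustrative field names rather than the portal's exact schema:

```python
import csv
import io

# Illustrative resource rows, mimicking the columns of the All resources list
resources = [
    {"NAME": "myvm01", "TYPE": "Virtual machine",
     "RESOURCE GROUP": "prod-rg", "LOCATION": "westus2"},
    {"NAME": "mystorage", "TYPE": "Storage account",
     "RESOURCE GROUP": "prod-rg", "LOCATION": "westus2"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["NAME", "TYPE", "RESOURCE GROUP", "LOCATION"])
writer.writeheader()
writer.writerows(resources)
print(buf.getvalue())
```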

Layout change for essential properties on overview pages

We’ve changed the way essential properties are laid out on overview pages, so less vertical scrolling is required. On standard wide-screen resolutions, the essential properties (key/value pairs) are laid out horizontally rather than vertically to save vertical space. You will still get the vertical layout if there is not enough horizontal space to show the important information without truncation or ellipsis.

Screenshot of overview page and new properties layout

  1. Select Virtual Machines within the menu on the left.
  2. Select any virtual machine.

Site Recovery

Azure Site Recovery UI updates

The new enhanced IaaS VM disaster recovery experience, with multiple tabs, lets you configure replication with a single click. It’s as simple as selecting the Target region.

Screenshot of disaster recovery experience

  1. Select any virtual machine.
  2. Select Disaster recovery within the menu located on the left.
  3. Select Target region.
  4. Select Review + Start replication.

We also now have a new immersive experience for Site Recovery infrastructure with the addition of an overview tab.

Screenshot of Site Recovery infrastructure

  1. Select any Recovery Services vault.
  2. Select Site Recovery infrastructure under the subheading Manage.

Other

Updates to Microsoft Intune

The Microsoft Intune team has been hard at work on updates. You can find a complete list on the What's new in Microsoft Intune page, including changes that affect your experience using Intune.

Did you know?

You can always test features by visiting the preview version of Azure portal.

Next steps

Thank you for all your terrific feedback. The Azure portal is built by a large team of engineers who are always interested in hearing from you.

We recently launched the Azure portal “how to” series where you can learn about a specific feature of the portal in order to become more productive using it. To learn more please watch the videos “How to manage multiple accounts, directories, and subscriptions in Azure” and “How to create a virtual machine in Azure.” Keep checking in on the Azure YouTube channel for new videos each week.

If you’re interested in learning how we streamlined resource creation in Microsoft Azure to improve usability, consistency, and accessibility, read the new Medium article, “Creation at Cloud Scale.” If you’re curious to learn more about how the Azure portal is built, be sure to watch the Microsoft Ignite 2018 session, “Building a scalable solution to millions of users.”

Don’t forget to sign in on the Azure portal and download the Azure mobile app today to see everything that’s new. Let us know your feedback in the comments section or on Twitter. See you next month.

Investing in our partners’ success


Today Gavriella Schuster, CVP of Microsoft’s Partner organization, spoke about our longstanding commitment to partners, and new investments to enable partners to accelerate customer success.

As we shared in our recent earnings, Azure is growing at 76 percent, driven by a combination of continued innovation, strong customer adoption across industries and a global ecosystem of talented partners. I’m inspired by partners such as Finastra, Cognata, ABB, and Egress who are working with Azure to enable digital transformation within their respective industries.

While Microsoft has long been a partner-oriented organization, some things are different with the cloud. Specifically, partners need Microsoft to be more than just a great technology provider; you need us to be a trusted business partner. This requires long-term commitment and the ability to continually adapt and innovate as the market shifts. This has been, and continues to be, our commitment. Our partnership philosophy is grounded in the belief that we can only deliver on our mission if there is a strong and successful ecosystem around us.

In the spirit of being a trusted business partner, I wanted to highlight our key partner-oriented investments and some of the resources to help our partners successfully grow their businesses.  

Committed to growing our partners’ cloud businesses

Unlock new growth opportunities. Microsoft has sales organizations in 120 countries around the world. Our comprehensive partner co-selling program allows partners to tap into our global network to expose their solutions and services to new markets and new opportunities. Microsoft salespeople are paid to bring the best solutions to our customers, spanning both Microsoft and partner solutions.

The Azure Marketplace and AppSource digital storefronts enable customers to easily find, try, and buy the right solutions from our partners. In March, we will add new capabilities to our marketplaces that enable partners to publish to a single location and then merchandise to over 75 million Microsoft customers, thousands of Microsoft sales people, and tens of thousands of Microsoft partners with the click of a button. This new capability further enables partners in our Cloud Solution Provider (CSP) program to create comprehensive, tailored solutions for their end customers. And this is just the beginning. More innovations are on the way, and you can view what’s coming through our Marketplaces roadmap.

“Azure Marketplace has transformed Chef’s business because it has opened up brand new channels and a new lead generation.” – Michele Todd, Chef Software

Technical resources and support whenever and wherever you need it. Whether you’re getting acquainted with Azure or are further along in developing your solution, there are resources to help you find the answers:

Cloud migration. I previously wrote about how we’re making it easy for customers to migrate their existing workloads to Azure. For our SI and managed services partners, the approaching SQL Server 2008 and Windows Server 2008 end of support also brings new opportunities to provide cloud migration, app modernization, and ongoing app management services to customers. This migration opportunity alone represents over $50B for our partners.

We’ve created the Cloud Migration and Modernization partner playbook and offer the Azure FastTrack program to help you connect with Microsoft engineers as you accelerate this practice. And available this week, new migration content will be launched on Digital Marketing Content OnDemand, a free benefit in MPN Go-to-Market Services.

An open, hybrid, and trusted platform to turn ideas into solutions faster

Build on a secure and trusted foundation. With GDPR and cybersecurity top of mind for customers, partners need a cloud provider that allows them to focus on building their solution, not on performing security and privacy audits. Microsoft leads the industry in establishing clear security and privacy requirements and in consistently meeting them. And to protect our partners’ cloud-based innovations and investments, we’ve created unique programs like the Microsoft Azure IP Advantage program, which lets you leverage a portfolio of Microsoft’s patents to protect against IP infringement risks.

Flexibility to deliver hybrid cloud solutions. Azure has been developed for hybrid deployment from the ground up, providing partners the flexibility to build hybrid solutions for customers, using Windows and Linux.

Develop on any platform, with tools that you know and love. With Azure, partners can migrate existing apps to the cloud, implement Kubernetes-based architectures, or develop cloud-native apps using microservices and serverless technologies from Microsoft, our partners, and the open-source community.

New innovations to light up customer opportunities

Analytics and insights. Our customers’ hunger for better insights is creating great opportunities for partners. Azure enables customers to efficiently manage the end-to-end data analytics lifecycle. TimeXtender is helping customers speed up digital transformation by building platforms for operational data exchange (ODX) using Azure. Neal Analytics created an algorithm for retailers and consumer goods companies that makes inventory data actionable.

AI. Azure provides a comprehensive set of flexible AI services, and a thoughtful and trusted approach to AI, so partners can create AI solutions quickly and with confidence. Talview is a pioneer in using artificial intelligence (AI) and cognitive technologies to analyze video interviews in multiple formats. 

“The Talview platform was previously hosted on Amazon Web Services (AWS), but we shifted to Azure because its AI capabilities were deeper and richer for our needs.” – Sanjoe Jose, CEO, Talview

Internet of Things. Partners’ use of Azure IoT has become a key differentiator. Willow is enabling its customer thyssenkrupp Elevator to drive building insights and improvements using Azure Digital Twins, which creates virtual representations of the physical world, allowing partners to develop contextually aware solutions specific to their industries.

“Partnering with Microsoft gives us access to both the best technology platform for designing and developing innovative solutions for our clients, along with the best partner enablement organization in the industry.” – Matt Jackson, VP Services for Americas, Insight

We are thrilled to be on this journey together with you. And, if you’re new to Azure, I invite you to become an Azure partner today.

Best practices to consider before deploying a network virtual appliance


A network virtual appliance (NVA) is a virtual appliance primarily focused on network functions virtualization. A typical network virtual appliance provides various layer-four to layer-seven functions such as firewall, WAN optimizer, application delivery controller, router, load balancer, IDS/IPS, proxy, SD-WAN edge, and more. While the public cloud may provide some of these functionalities natively, it is quite common to see customers deploy network virtual appliances from independent software vendors (ISVs). These capabilities in the public cloud enable hybrid solutions and are generally available through the Azure Marketplace.

What exactly is a network virtual appliance in the cloud?

A network virtual appliance is often a full Linux virtual machine (VM) image, consisting of a Linux kernel along with user-level applications and services. When a VM is created, it first boots the Linux kernel to initialize the system and then starts any application or management services needed to make the network virtual appliance functional. The cloud provider is responsible for the compute resources, while the ISV provides the image that represents the software stack of the virtual appliance.

As in a standard Linux distribution, the Linux kernel is integral to the NVA’s image and is provided, and often customized, by the ISV. The kernel includes the drivers needed for all network and disk devices available to the virtual machine. The version of, and customizations made to, the NVA’s kernel often affect the performance and functionality of the virtual machine; for more information about Linux and accelerated networking, see our documentation, “Create a Linux virtual machine with Accelerated Networking.” As new networking enhancements are made to the Azure platform, such as performance improvements or entirely new networking features, the ISV may need to update the software image to support those enhancements. Often, this entails updating their version of the Linux kernel from the upstream Linux project. For the latest updates, see the Linux Kernel Archives website.

All NVA images published in the Azure Marketplace go through rigorous testing and onboarding workflows. As part of Azure’s continuous integration and deployment life cycle, NVA images are deployed and tested in a pre-production environment for any regressions or issues. ISVs are responsible for publishing deployment guidelines and GitHub-published Azure Resource Manager (ARM) templates for their specific products. Technical and performance specifications of the appliance are owned by the ISVs, while Microsoft owns the technical and performance specifications of the host environment. Technical support for the customer’s virtual appliance, its features, recommended OS version, kernel version, and security updates is provided by the ISV.

Pricing for NVA solutions may vary based on product types and publisher specifications. Software license fees and Microsoft Azure usage costs are charged separately through the Azure subscription. Learn more by visiting our list of Marketplace FAQs related to virtual appliances and the Azure Marketplace.

Below is an example of a hybrid network that extends an on-premises network to Azure. The demilitarized zone (DMZ) is a perimeter network between on-premises and Azure that includes NVAs.

Flowchart example of a hybrid network that extends an on-premises network to Azure

Another example below shows an NVA with Azure Virtual WAN. For more details on how to steer traffic from a Virtual WAN hub to a network virtual appliance, please visit our documentation, “Create a Virtual Hub route table to steer traffic to a Network Virtual Appliance.”

Flowchart example of NVA with Azure Virtual WAN

Common best practices

Microsoft continues to collaborate with multiple ISVs to improve the cloud experience for Microsoft customers.

  • Azure accelerated networking support: Consider a virtual appliance that is available on one of the supported VM types with Azure’s accelerated networking capability. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host in the datapath, reducing latency, jitter, and CPU utilization for the most demanding network workloads on supported VM types. Accelerated networking is supported on most general-purpose and compute-optimized instance sizes with two or more vCPUs. For a list of supported operating systems and additional information, visit our documentation, “Create a Windows virtual machine with Accelerated Networking.”
  • Multi-NIC support: A network interface (NIC) is the interconnection between a VM and a virtual network (VNet). A VM must have at least one NIC but can have more, depending on the size of the VM you create. Learn how many NICs each VM size supports for Windows and Linux in our documentation, “Sizes for Windows virtual machines in Azure” or “Sizes for Linux virtual machines in Azure.” Many network virtual appliances require multiple NICs. With multiple NICs you can better manage your network traffic by isolating different types of traffic on different NICs. A good example is separating data-plane traffic from management-plane traffic, which requires a VM that supports at least two NICs. A VM can only have as many network interfaces attached to it as its size supports. If you are adding a NIC after deploying the NVA, be sure to enable IP forwarding on the NIC. This setting disables Azure’s check of the source and destination for a network interface. Learn more about how to enable IP forwarding for a network interface.
  • HA ports with Azure Load Balancer: Azure Standard Load Balancer helps you load-balance TCP and UDP flows on all ports simultaneously when you’re using an internal load balancer. A high availability (HA) ports load-balancing rule is a variant of a load-balancing rule, configured on an internal Standard Load Balancer. To make your NVA reliable and highly available, add NVA instances to the back-end pool of your internal load balancer and configure an HA ports load-balancing rule. For more information, please visit our documentation, “High availability ports overview.”

Flowchart example of HA Port with Azure Load Balancer
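To illustrate the idea behind HA ports, here is a toy sketch of per-flow distribution across the NVA instances in a back-end pool. The real load balancer uses its own hashing and health-probe machinery; this is only a conceptual model:

```python
def pick_backend(flow, backends):
    """Toy per-flow hash: map a 5-tuple to one healthy NVA instance.

    flow is (src_ip, src_port, dst_ip, dst_port, protocol). With an HA
    ports rule, flows on ANY port and protocol are distributed this way.
    """
    healthy = [b for b in backends if b["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy NVA instances in the back-end pool")
    return healthy[hash(flow) % len(healthy)]["name"]

pool = [{"name": "nva-0", "healthy": True}, {"name": "nva-1", "healthy": True}]
flow = ("10.0.0.4", 50123, "10.1.0.7", 443, "TCP")

# The same flow always maps to the same instance while pool health is unchanged
assert pick_backend(flow, pool) == pick_backend(flow, pool)
```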

  • Support for Virtual Machine Scale Sets (VMSS): Azure Virtual Machine Scale Sets let you create and manage a group of identical, load-balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule. Scale sets provide high availability for your applications and allow you to centrally manage, configure, and update a large number of VMs, with the management and automation layers provided to run and scale your applications. For more information, visit our documentation, “What are virtual machine scale sets.”
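The scale-out/scale-in behavior described above can be sketched as a simple threshold rule. The thresholds and instance bounds below are illustrative, not Azure autoscale defaults:

```python
def desired_instance_count(current, avg_cpu, scale_out_at=75, scale_in_at=25,
                           minimum=2, maximum=10):
    """Toy autoscale rule: add an instance when average CPU is high,
    remove one when it is low, always staying within [minimum, maximum]."""
    if avg_cpu > scale_out_at:
        current += 1
    elif avg_cpu < scale_in_at:
        current -= 1
    return max(minimum, min(maximum, current))

print(desired_instance_count(3, 80))  # 4  (scale out under load)
print(desired_instance_count(3, 10))  # 2  (scale in when idle)
print(desired_instance_count(2, 10))  # 2  (never below the minimum)
```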

As enterprises move ever more demanding mission-critical workloads to the cloud, it is important to consider comprehensive networking services that are easy to deploy, manage, scale, and monitor. We are fully committed to providing you the best network virtual appliance experience, one that delivers all the benefits of the cloud in conjunction with your network needs. Picking a virtual appliance is an important decision when you are designing your network, and we want to help you make it with ease of use, scale, and a better future together in mind.

Additional links

Build your own deep learning models on Azure Data Science Virtual Machines


As a modern developer, you may be eager to build your own deep learning models but aren’t quite sure where to start. If this is you, I recommend you take a look at the deep learning course from fast.ai. This new fast.ai course helps software developers start building their own state-of-the-art deep learning models. Developers who complete this fast.ai course will become proficient in deep learning techniques in multiple domains including computer vision, natural language processing, recommender algorithms, and tabular data.

Fast.ai top banner

You’ll also want to learn about Microsoft’s Azure Data Science Virtual Machine (DSVM). Azure DSVM empowers developers like you with the tools you need to be productive with this fast.ai course today on Azure, with virtually no setup required. Using fast cloud-based GPU virtual machines (VMs) at the most competitive rates, Azure DSVM saves you time that would otherwise be spent on installation, configuration, and waiting for deep learning models to train.

Here is how you can effectively run the fast.ai course examples on Azure.

Running the fast.ai deep learning course on Azure DSVM

While there are several ways in which you can use Azure for your deep learning course, one of the easiest ways is to leverage Azure Data Science Virtual Machine (DSVM). Azure DSVM is a family of virtual machine (VM) images that are pre-configured with a rich curated set of tools and frameworks for data science, deep learning, and machine learning.

Using Azure DSVM, you can utilize tools like Jupyter notebooks and the necessary drivers to run on powerful GPUs. As a result, you save time that would otherwise be spent installing, configuring, and troubleshooting compatibility issues on your system. Azure DSVM is offered in both Linux and Windows editions. Azure VMs provide a neat extension mechanism that the DSVM can leverage, allowing you to automatically configure your VM to your needs.

Microsoft provides an extension to the DSVM specifically for the fast.ai course, making the process so simple that you can answer a couple of questions and get your own instance of the DSVM provisioned in a few minutes. The fast.ai extension installs all the necessary libraries you need to run the course Jupyter notebooks and also pulls down the latest course notebooks from the fast.ai GitHub repository. So in a very short time, you’ll be ready to start running your course samples.

Getting started with Azure DSVM and fast.ai

Here’s how simple it is to get started:

1. Sign in or sign up for an Azure subscription

If you don’t have an Azure subscription, you can start off with a free trial subscription to explore any Azure service for 30 days, plus access to a set of popular services free for 12 months. Please note that free trial subscriptions do not give access to GPU resources. For GPU access, you need to sign up for an Azure pay-as-you-go subscription or use the Azure credits from a Visual Studio subscription if you have one. Once you have created your subscription, you can log in to the Azure portal.

2. Create a DSVM instance with fast.ai extension

You can now create a DSVM with the fast.ai extension by selecting one of the links below. Choose one depending on whether you prefer a Windows or a Linux environment for your course.

After answering a few simple questions in the deployment form, your VM is created in about five to 10 minutes, pre-configured with everything you need to run the fast.ai course. While creating the DSVM, you can choose between a GPU-based or a CPU-only instance. A GPU instance will drastically cut down execution times when training deep learning models, which is largely what the course notebooks cover, so I recommend a GPU instance. Azure also offers low-priority instances, including GPU instances, at a significant discount of as much as 80 percent on compute usage charges compared to standard instances. Keep in mind, though, that they can be preempted and deallocated from your subscription at any time depending on factors like demand for these resources. If you want to take advantage of the deep discount, you can create a preemptible Linux DSVM instance with the fast.ai extension.
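To see what that discount can mean for a training run, here is a back-of-the-envelope comparison. The hours and hourly rate are placeholders; only the roughly 80 percent figure comes from the text above:

```python
def estimated_compute_cost(hours, standard_rate, low_priority_discount=0.80):
    """Compare standard vs. low-priority compute cost for a training run.

    standard_rate is a hypothetical per-hour price; actual Azure rates vary
    by VM size and region.
    """
    standard = hours * standard_rate
    low_priority = round(standard * (1 - low_priority_discount), 2)
    return standard, low_priority

# A 20-hour training run at a placeholder $1.00/hour
std, low = estimated_compute_cost(hours=20, standard_rate=1.00)
print(std, low)  # 20.0 4.0
```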

3. Run your course notebooks

Once you have created your DSVM instance, you can immediately start using it to run all the code in the course examples by accessing Jupyter and the course notebooks that are preloaded in the DSVM.

Fast.ai notebook screenshot

You can find more information on how to get started with fast.ai for Azure on the course documentation page.

Next steps

You can continue your journey in machine learning and data science by taking a look at the Azure Machine Learning service, which enables you to track your experiments. You can also use automated machine learning, build custom models, and deploy machine learning and deep learning models or pipelines in production at scale, with several sample notebooks pre-built into the DSVM. You can also find additional learning resources at Microsoft’s AI School and LearnAnalytics.

I look forward to your feedback and questions on the fast.ai forums or on Stack Overflow.

Submit to the Applied F# Challenge!


This post was written by Lena Hall, a Senior Cloud Developer Advocate at Microsoft.

The F# Software Foundation has recently announced their new initiative — the Applied F# Challenge! We encourage you to participate and send your submissions about F# on Azure through the participation form.

Applied F# Challenge is a new initiative to encourage in-depth educational submissions to reveal more of the interesting, unique, and advanced applications of F#.

The motivation for the challenge is to uncover more of the advanced and innovative F# scenarios and applications we hear about less often:

We primarily hear about people using F# for web development, analytical programming, and scripting. While those are perfect use cases for F#, there are many more brilliant and less covered scenarios where F# has demonstrated its strength. For example, F# is used in quantum computing, cancer research, bioinformatics, IoT, and other domains that are not typically mentioned as often.

You have some time to think about the topic for your submission because the challenge is open from February 1 to May 20 this year.

What should you submit?

Publish a new article or an example code project that covers a use case or scenario where you feel the use of F# is essential or unique. The full eligibility criteria and frequently asked questions are listed in the official announcement.

There are multiple challenge categories you can choose to write about:

F# for machine learning and data science.
F# for distributed systems.
F# in the cloud: web, serverless, containers, etc.
F# for desktop and mobile development.
F# in your organization or domain: healthcare, finance, games, retail, etc.
F# and open-source development.
F# for IoT or hardware programming.
F# in research: quantum, bioinformatics, security, etc.
Out of the box F# topics, scenarios, applications, or examples.

Why should you participate in the challenge?

All submissions will receive F# stickers as a participation reward for contributing to the efforts of improving the F# ecosystem and raising awareness of F# strengths in advanced or uncovered use cases.

Participants with winning submissions in each category will also receive the title of a Recognized F# Expert by F# Software Foundation and a special non-monetary prize.

Each challenge category will be judged by a committee that includes many notable F# experts and community leaders, including Don Syme, Rachel Blasucci, Evelina Gabasova, Henrik Feldt, Tomas Petricek, and many more.

As the participation form suggests, you will also have an opportunity to be included in a recommended speaker list by F# Software Foundation.

Spread the word

Help us spread the word about the Applied F# Challenge by encouraging others to participate with the #AppliedFSharpChallenge hashtag on Twitter!

Accelerating the ISV Opportunity


Microsoft has a rich history as a platform company, focused on creating healthy ecosystems by partnering with hardware and software companies in ways that provide for mutual success. Having spent close to 10 years leading our efforts across our hardware (OEM) and software (ISV) ecosystems, I know first-hand how hard we work to make partners successful with Windows, Azure, and the other Microsoft products, and how this work continues today through both our OEM and One Commercial Partner (OCP) organizations.

As we look forward, we want to build on the work we've done over the last 5+ years with the software as a service (SaaS) and line of business (LOB) ISV ecosystem around Azure and Office 365, and extend it to do more with Dynamics 365 and the Power platform. As we transition the older Dynamics offerings from a series of monolithic applications to a set of SaaS offerings built on a uniform application platform, we have a new opportunity to partner with the ISV community, both on the product/platform capabilities and on our go to market (GTM) efforts.

First, some background. Office, Dynamics, and Windows each went through multi-year journeys to transition from on-premises offerings to SaaS offerings. James Phillips recently wrote about the digital feedback loop from the Power platform and how it has empowered people to change their organizations with software that would not previously have been possible for them. The insights derived from Power BI are made actionable by applications created with PowerApps, and Flow then simplifies robotic process automation (RPA), driving feedback to Power BI and continuing the digital feedback loop.

At the core of these platform assets is data. To maintain a consistent data model, the Power platform supports the common data model (CDM), where you can leverage or define a consistent data schema for entities. By defining it early, you can create producers and consumers of data and have them work together on the same data set, making the whole more valuable than the individual applications. For LOB ISVs this creates an opportunity to build on top of this data asset, connect additional data sources, or embed it in other applications.

Furthermore, Microsoft products historically each had their own extensibility model, so someone wanting to extend Office had to work with a different model than someone extending Dynamics 365. Satya Nadella recently talked to the press about how the Power platform is the extensibility model for Microsoft 365 and Dynamics 365, along with being able to integrate with third-party tools. Looking through the April 2019 release notes shows what we are working on for our next release; I recommend that ISVs review the notes to identify new opportunities for their business.

As an example of how ISVs are starting to leverage the Power platform and Dynamics 365, consider Indegene, a partner that develops solutions for healthcare and pharmaceutical enterprises globally. They combined Dynamics 365, the Power platform, and Azure to bring a new customer engagement solution to the life sciences/healthcare industry: they extended the CDM for life sciences sales, marketing, and account management; added numerous commonly used business processes, including a mobility component; and embedded descriptive as well as prescriptive intelligence capabilities that a life sciences manufacturer would typically not have as part of their core solution. Taking advantage of the broader Azure platform, they included services for AI (natural language understanding, speech, graph database, and more) that work with the data model and business logic in Dynamics 365 and the Power platform. With minimal development, Indegene was able to create intelligent assistants and customer-level personalization in life sciences. Without these Microsoft capabilities, this type of enterprise solution would have taken Indegene many years to develop. As Sanjay Virmani, EVP at Indegene, recently said, “as customer engagement with healthcare providers and patients is moving towards becoming more intelligent than merely transactional, we saw a significant opportunity to disrupt the space and having a full toolbox between Dynamics 365 + Power platform + Azure has been the key to our substantive progress.”

We recognize that technology is only part of what makes ISVs successful; it is important that the business side is equally robust. In support of this, we are complementing the ongoing engineering work with an additional focus on alignment with our OCP, Azure, and field teams to create the right GTM motions for ISVs. Changes in the underlying technology often lead to taking a first-principles approach to creating new software, and we are doing the same exercise for our tools and resources for ISVs. I think of this in terms of key focus areas that include:

Developer Tools – Revamping our self-service tools for developers to learn about and leverage the Dynamics 365 solutions and the Power platform. We have a lot of work to do here in conjunction with our April release cycle, but know we are focusing on it. This also includes our work on accelerators, development centers, certification, and all the other support tools/services we can provide ISVs.

AppSource – We will align all our marketplace efforts with the work the Azure team is doing on Azure Marketplace and our joint work on a modern commercial marketplace, for the benefit of both AppSource customers and ISVs. This includes the technical and marketing benefits of both AppSource and Azure Marketplace. Being able to publish once and merchandize across storefronts to all of Microsoft's customers, sellers, and partners will open new growth opportunities for most ISVs.

GTM Alignment – We have the opportunity to work with the OCP team on the GTM work we’ve been doing with Azure across both marketing tools and co-sell motions. Aligning our efforts in this area will bring the Dynamics 365/LOB ecosystem support in line with our Azure GTM efforts.

New Offerings – Along with aligning with the company's current GTM efforts, we are asking what we can learn from other companies in the industry and bring forward to our partners. Given all the changes we are working on, we are using this time to look at what we can learn from others that could be applied to our ecosystem.

So, as we look toward the April release and our new fiscal year, know that the teams are focused on ISV support for the evolution of Dynamics 365 and the Power platform, alignment with the Azure AppSource efforts, coordination with other Microsoft GTM motions, and consideration of new best practices. We are taking feedback from ISVs to make sure the marketplace has the benefits and GTM strategies they need to maximize success, in addition to looking at the platform to make sure there is enough surface area for them to be successful. You can hear more of my thoughts on supporting ISVs as part of a broader conversation on the Steve Mordue podcast. In February I will be attending Business Forward in Paris and Mobile World Congress in Barcelona if you would like to meet about your ISV business.

Cheers,

Guggs

Blazor 0.8.0 experimental release now available


Blazor 0.8.0 is now available! This release updates Blazor to use Razor Components in .NET Core 3.0 and adds some critical bug fixes.

Get Blazor 0.8.0

To get started with Blazor 0.8.0 install the following:

  1. .NET Core 3.0 Preview 2 SDK (3.0.100-preview-010184)
  2. Visual Studio 2019 (Preview 2 or later) with the ASP.NET and web development workload selected.
  3. The latest Blazor extension from the Visual Studio Marketplace.
  4. The Blazor templates on the command-line:

    dotnet new -i Microsoft.AspNetCore.Blazor.Templates::0.8.0-preview-19104-04
    

You can find getting started instructions, docs, and tutorials for Blazor at https://blazor.net.

Upgrade to Blazor 0.8.0

To upgrade your existing Blazor apps to Blazor 0.8.0, first make sure you've installed the prerequisites listed above.

To upgrade a standalone Blazor 0.7.0 project to 0.8.0:

  • Update the Blazor packages and .NET CLI tool references to 0.8.0-preview-19104-04.
  • Replace any package reference to Microsoft.AspNetCore.Blazor.Browser with a reference to Microsoft.AspNetCore.Blazor.
  • Replace BlazorComponent with ComponentBase.
  • Update overrides of SetParameters on components to override SetParametersAsync instead.
  • Replace BlazorLayoutComponent with LayoutComponentBase
  • Replace IBlazorApplicationBuilder with IComponentsApplicationBuilder.
  • Replace any using statements for Microsoft.AspNetCore.Blazor.* with Microsoft.AspNetCore.Components.*, except leave Microsoft.AspNetCore.Blazor.Hosting in Program.cs
  • In index.html update the script reference to reference components.webassembly.js instead of blazor.webassembly.js
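After those steps, the tail of a standalone project's index.html might look like this (a sketch of the default template; only the script file name changes from 0.7.0):

```html
<!-- index.html: the bootstrapping script was renamed in 0.8.0 -->
<body>
    <app>Loading...</app>
    <script src="_framework/components.webassembly.js"></script>
</body>
```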

To upgrade an ASP.NET Core hosted Blazor app to 0.8.0:

  • Update the client-side Blazor project as described previously.
  • Update the ASP.NET Core app hosting the Blazor app to .NET Core 3.0 by following the migrations steps in the ASP.NET Core docs.
    • Update the target framework to be netcoreapp3.0
    • Remove any package reference to Microsoft.AspNetCore.App or Microsoft.AspNetCore.All
    • Upgrade any non-Blazor Microsoft.AspNetCore.* package references to version 3.0.0-preview-19075-0444
    • Remove any package reference to Microsoft.AspNetCore.Razor.Design
  • To enable JSON support, add a package reference to Microsoft.AspNetCore.Mvc.NewtonsoftJson and update Startup.ConfigureServices to call services.AddMvc().AddNewtonsoftJson()
  • Upgrade the Microsoft.AspNetCore.Blazor.Server package reference to 0.8.0-preview-19104-04
  • Add a package reference to Microsoft.AspNetCore.Components.Server
  • In Startup.ConfigureServices simplify any call to services.AddResponseCompression to call the default overload, without specifying WebAssembly or binary data as additional MIME types to compress.
  • In Startup.Configure add a call to app.UseBlazorDebugging() after the existing call to app.UseBlazor<App.Startup>()
  • Remove any unnecessary use of the Microsoft.AspNetCore.Blazor.Server namespace.
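Applied together, the server's Startup changes above might look roughly like this (a sketch only; App.Startup and the middleware order follow the default hosted template and may differ in your app):

```csharp
// Server-side Startup.cs for an ASP.NET Core hosted Blazor app on 0.8.0 (sketch)
public void ConfigureServices(IServiceCollection services)
{
    // JSON support now comes from the separate Microsoft.AspNetCore.Mvc.NewtonsoftJson package
    services.AddMvc().AddNewtonsoftJson();

    // Default overload: WebAssembly/binary MIME types no longer need to be listed explicitly
    services.AddResponseCompression();
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseResponseCompression();
    app.UseBlazor<App.Startup>();

    // New in 0.8.0: enables the client-side debugging proxy
    app.UseBlazorDebugging();
}
```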

To upgrade a Blazor class library to 0.8.0:

  • Replace the package references to Microsoft.AspNetCore.Blazor.Browser and Microsoft.AspNetCore.Blazor.Build with references to Microsoft.AspNetCore.Components.Browser and Microsoft.AspNetCore.Components.Build and update the versions to 3.0.0-preview-19075-0444.

Server-side Blazor is now ASP.NET Core Razor Components in .NET Core 3.0

As was recently announced, server-side Blazor is now shipping as ASP.NET Core Razor Components in .NET Core 3.0. We've integrated the Blazor component model into ASP.NET Core 3.0 and renamed it to Razor Components. Blazor 0.8.0 is now built on Razor Components and enables you to host Razor Components in the browser on WebAssembly.

Upgrade a server-side Blazor project to ASP.NET Core Razor Components in .NET Core 3.0

If you've been working with server-side Blazor, we recommend upgrading to use ASP.NET Core Razor Components in .NET Core 3.0.

To upgrade a server-side Blazor app to ASP.NET Core Razor Components:

  • Update the client-side Blazor project as described previously, except replace the script reference to blazor.server.js with components.server.js
  • Update the ASP.NET Core app hosting the Razor Components to .NET Core 3.0 as described previously.
  • In the server project:
    • Upgrade the Microsoft.AspNetCore.Blazor.Server package reference to 0.8.0-preview-19104-04
    • Add a package reference to Microsoft.AspNetCore.Components.Server version 3.0.0-preview-19075-0444
    • Replace the using statement for Microsoft.AspNetCore.Blazor.Server with Microsoft.AspNetCore.Components.Server
    • Replace services.AddServerSideBlazor with services.AddRazorComponents and app.UseServerSideBlazor with app.UseRazorComponents.
    • In the Startup.Configure method add app.UseStaticFiles() just prior to calling app.UseRazorComponents.
    • Move the wwwroot folder from the Blazor app project to the ASP.NET Core server project
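Taken together, the server project's Startup might end up looking like this sketch (App.Startup stands in for your components' startup class; names and ordering are assumptions based on the steps above):

```csharp
// Server Startup.cs after moving from server-side Blazor to Razor Components (sketch)
public void ConfigureServices(IServiceCollection services)
{
    // Replaces services.AddServerSideBlazor<App.Startup>()
    services.AddRazorComponents<App.Startup>();
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Serve assets from the wwwroot folder moved into this project
    app.UseStaticFiles();

    // Replaces app.UseServerSideBlazor<App.Startup>()
    app.UseRazorComponents<App.Startup>();
}
```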

Switching between ASP.NET Core Razor Components and client-side Blazor

Sometimes it's convenient to be able to switch between running your Razor Components on the server (ASP.NET Core Razor Components) and on the client (Blazor). For example, you might run on the server during development so that you can easily debug, but then publish your app to run on the client.

To update an ASP.NET Core hosted Blazor app so that it can be run as an ASP.NET Core Razor Components app:

  • Move the wwwroot folder from the client-side Blazor project to the ASP.NET Core server project.
  • In the server project:
    • Update the script tag in index.html to point to components.server.js instead of components.webassembly.js.
    • Add a call to services.AddRazorComponents<Client.Startup>() in the Startup.ConfigureServices method.
    • Add a call to app.UseStaticFiles() in the Startup.Configure method prior to the call to UseMvc.
    • Replace the call to UseBlazor with app.UseRazorComponents<Client.Startup>()
  • If you're using dependency injection to inject an HttpClient into your components, then you'll need to add an HttpClient as a service in your server's Startup.ConfigureServices method.
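A sketch of what the server's Startup might look like when running the client project's components server-side (Client.Startup is the client startup class in the default hosted template; the HttpClient base address below is an assumption):

```csharp
// Server Startup.cs running the client project's components server-side (sketch)
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddRazorComponents<Client.Startup>();

    // Components executing on the server don't get the browser's HttpClient,
    // so register one explicitly for dependency injection (base address assumed)
    services.AddScoped(s => new HttpClient
    {
        BaseAddress = new Uri("https://localhost:5001/")
    });
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Static files must be registered before MVC
    app.UseStaticFiles();

    // Replaces app.UseBlazor<Client.Startup>()
    app.UseRazorComponents<Client.Startup>();

    app.UseMvc();
}
```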

Tooling for Blazor projects is now included with Visual Studio 2019

Previously, to get tooling support for Blazor projects you needed to install the Blazor extension for Visual Studio. Starting with Visual Studio 2019 Preview 2, tooling support for Razor Components (and hence Blazor apps) is included without having to install anything else. The Blazor extension is now only needed to install the Blazor project templates in Visual Studio.

Runtime improvements

Blazor 0.8.0 includes some .NET runtime improvements, like improved runtime performance on Chrome and an improved IL linker. In our performance benchmarks, Blazor 0.8.0 performance on Chrome is now about 25% faster. You can also now reference existing libraries like Json.NET from a Blazor app without any additional linker configuration:

@functions {
    WeatherForecast[] forecasts;

    protected override async Task OnInitAsync()
    {
        var json = await Http.GetStringAsync("api/SampleData/WeatherForecasts");
        forecasts = Newtonsoft.Json.JsonConvert.DeserializeObject<WeatherForecast[]>(json);
    }
}

Known issues

There are a couple of known issues with this release that you may run into:

  • "It was not possible to find any compatible framework version. The specified framework 'Microsoft.NETCore.App', version '2.0.0' was not found.": You may see this error when building a Blazor app because the IL linker currently requires .NET Core 2.x to run. To work around this issue, either install .NET Core 2.2 or disable IL linking by setting the <BlazorLinkOnBuild>false</BlazorLinkOnBuild> property in your project file.
  • "Unable to generate deps.json, it may have been already generated.": You may see this error when running a standalone Blazor app and you haven't yet restored packages for any .NET Core apps. To workaround this issue create any .NET Core app (ex dotnet new console) and then rerun the Blazor app.

These issues will be addressed in a future Blazor update.

Future updates

This release of Blazor was primarily focused on first integrating Razor Components into ASP.NET Core 3.0 and then rebuilding Blazor on top of that. Going forward, we plan to ship Blazor updates with each .NET Core 3.0 update.

Blazor, and support for running Razor Components on WebAssembly in the browser, won't ship with .NET Core 3.0, but we continue to work towards shipping Blazor at a later date.

Give feedback

We hope you enjoy this latest preview release of Blazor. As with previous releases, your feedback is important to us. If you run into issues or have questions while trying out Blazor, file issues on GitHub. You can also chat with us and the Blazor community on Gitter if you get stuck or to share how Blazor is working for you. After you've tried out Blazor for a while please let us know what you think by taking our in-product survey. Click the survey link shown on the app home page when running one of the Blazor project templates:

Blazor survey

Thanks for trying out Blazor!


Windows 10 SDK Preview Build 18327 available now!


Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 18327 or greater). The Preview SDK Build 18327 contains bug fixes and under-development changes to the API surface area.

The Preview SDK can be downloaded from the developer section on Windows Insider.

For feedback and updates to the known issues, please see the developer forum.  For new developer feature requests, head over to our Windows Platform UserVoice.

Things to note:

  • This build works in conjunction with previously released SDKs and Visual Studio 2017. You can install this SDK and still continue to submit apps that target Windows 10 build 1809 or earlier to the Store.
  • The Windows SDK will now formally only be supported by Visual Studio 2017 and greater. You can download Visual Studio 2017 here.
  • This build of the Windows SDK will install ONLY on Windows 10 Insider Preview builds.
  • To assist with script access to the SDK, the ISO can also be accessed through the following URL once the static URL is published: https://go.microsoft.com/fwlink/?prd=11966&pver=1.0&plcid=0x409&clcid=0x409&ar=Flight&sar=Sdsurl&o1=18327

Tools Updates

Message Compiler (mc.exe)

  • The “-mof” switch (to generate XP-compatible ETW helpers) is deprecated and will be removed in a future version of mc.exe. Removing this switch will cause the generated ETW helpers to expect Vista or later.
  • The “-A” switch (to generate .BIN files using ANSI encoding instead of Unicode) is deprecated and will be removed in a future version of mc.exe. Removing this switch will cause the generated .BIN files to use Unicode string encoding.
  • The behavior of the “-A” switch has changed. Prior to the Windows 1607 Anniversary Update SDK, when using the -A switch, BIN files were encoded using the build system’s ANSI code page. In the Windows 1607 Anniversary Update SDK, mc.exe’s behavior was inadvertently changed to encode BIN files using the build system’s OEM code page. In the 19H1 SDK, mc.exe’s previous behavior has been restored: it now encodes BIN files using the build system’s ANSI code page. Note that the -A switch is deprecated, as ANSI-encoded BIN files do not provide a consistent user experience in multilingual systems.

Breaking Changes

Change to effect graph of the AcrylicBrush

In this Preview SDK we’re adding a blend mode called Luminosity to the effect graph of the AcrylicBrush. This blend mode ensures that shadows do not appear behind acrylic surfaces without a cutout. We will also expose a LuminosityBlendOpacity API that allows for more AcrylicBrush customization.

By default, for those who have not specified any LuminosityBlendOpacity on their AcrylicBrushes, we have implemented logic to ensure that the acrylic looks as similar as possible to current 1809 acrylics. Please note that we will be updating our default brushes to account for this recipe change.
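As a rough illustration, the new opacity knob might be set in XAML like this (a hypothetical sketch based on the API name above; exact property placement and values may differ in the final SDK):

```xml
<!-- Hypothetical XAML sketch: tuning the Luminosity blend on an AcrylicBrush -->
<Grid>
  <Grid.Background>
    <AcrylicBrush BackgroundSource="HostBackdrop"
                  TintColor="#FF2B2B2B"
                  TintOpacity="0.6"
                  LuminosityBlendOpacity="0.8" />
  </Grid.Background>
</Grid>
```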

TraceLoggingProvider.h  / TraceLoggingWrite

Events generated by TraceLoggingProvider.h (e.g. via TraceLoggingWrite macros) will now always have Id and Version set to 0.

Previously, TraceLoggingProvider.h would assign IDs to events at link time. These IDs were unique within a DLL or EXE, but changed from build to build and from module to module.

API Updates, Additions and Removals

Note: There have been no changes to the list since the last flighted build, 10.0.18323.0.

Additions:

 

namespace Windows.AI.MachineLearning {
  public sealed class LearningModelSession : IClosable {
    public LearningModelSession(LearningModel model, LearningModelDevice deviceToRunOn, LearningModelSessionOptions learningModelSessionOptions);
  }
  public sealed class LearningModelSessionOptions
  public sealed class TensorBoolean : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorBoolean CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorBoolean CreateFromShapeArrayAndDataArray(long[] shape, bool[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorDouble : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorDouble CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorDouble CreateFromShapeArrayAndDataArray(long[] shape, double[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorFloat : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorFloat CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorFloat CreateFromShapeArrayAndDataArray(long[] shape, float[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorFloat16Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorFloat16Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorFloat16Bit CreateFromShapeArrayAndDataArray(long[] shape, float[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorInt16Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorInt16Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorInt16Bit CreateFromShapeArrayAndDataArray(long[] shape, short[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorInt32Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorInt32Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorInt32Bit CreateFromShapeArrayAndDataArray(long[] shape, int[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorInt64Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorInt64Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorInt64Bit CreateFromShapeArrayAndDataArray(long[] shape, long[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorInt8Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorInt8Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorInt8Bit CreateFromShapeArrayAndDataArray(long[] shape, byte[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorString : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorString CreateFromShapeArrayAndDataArray(long[] shape, string[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorUInt16Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorUInt16Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorUInt16Bit CreateFromShapeArrayAndDataArray(long[] shape, ushort[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorUInt32Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorUInt32Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorUInt32Bit CreateFromShapeArrayAndDataArray(long[] shape, uint[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorUInt64Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorUInt64Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorUInt64Bit CreateFromShapeArrayAndDataArray(long[] shape, ulong[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorUInt8Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorUInt8Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorUInt8Bit CreateFromShapeArrayAndDataArray(long[] shape, byte[] data);
    IMemoryBufferReference CreateReference();
  }
}
namespace Windows.ApplicationModel {
  public sealed class Package {
    StorageFolder EffectiveLocation { get; }
    StorageFolder MutableLocation { get; }
  }
}
namespace Windows.ApplicationModel.AppService {
  public sealed class AppServiceConnection : IClosable {
    public static IAsyncOperation<StatelessAppServiceResponse> SendStatelessMessageAsync(AppServiceConnection connection, RemoteSystemConnectionRequest connectionRequest, ValueSet message);
  }
  public sealed class AppServiceTriggerDetails {
    string CallerRemoteConnectionToken { get; }
  }
  public sealed class StatelessAppServiceResponse
  public enum StatelessAppServiceResponseStatus
}
namespace Windows.ApplicationModel.Background {
  public sealed class ConversationalAgentTrigger : IBackgroundTrigger
}
namespace Windows.ApplicationModel.Calls {
  public sealed class PhoneLine {
    string TransportDeviceId { get; }
    void EnableTextReply(bool value);
  }
  public enum PhoneLineTransport {
    Bluetooth = 2,
  }
  public sealed class PhoneLineTransportDevice
}
namespace Windows.ApplicationModel.Calls.Background {
  public enum PhoneIncomingCallDismissedReason
  public sealed class PhoneIncomingCallDismissedTriggerDetails
  public enum PhoneTriggerType {
    IncomingCallDismissed = 6,
  }
}
namespace Windows.ApplicationModel.Calls.Provider {
  public static class PhoneCallOriginManager {
    public static bool IsSupported { get; }
  }
}
namespace Windows.ApplicationModel.ConversationalAgent {
  public sealed class ConversationalAgentSession : IClosable
  public sealed class ConversationalAgentSessionInterruptedEventArgs
  public enum ConversationalAgentSessionUpdateResponse
  public sealed class ConversationalAgentSignal
  public sealed class ConversationalAgentSignalDetectedEventArgs
  public enum ConversationalAgentState
  public sealed class ConversationalAgentSystemStateChangedEventArgs
  public enum ConversationalAgentSystemStateChangeType
}
namespace Windows.ApplicationModel.Preview.Holographic {
  public sealed class HolographicKeyboardPlacementOverridePreview
}
namespace Windows.ApplicationModel.Resources {
  public sealed class ResourceLoader {
    public static ResourceLoader GetForUIContext(UIContext context);
  }
}
namespace Windows.ApplicationModel.Resources.Core {
  public sealed class ResourceCandidate {
    ResourceCandidateKind Kind { get; }
  }
  public enum ResourceCandidateKind
  public sealed class ResourceContext {
    public static ResourceContext GetForUIContext(UIContext context);
  }
}
namespace Windows.ApplicationModel.UserActivities {
  public sealed class UserActivityChannel {
    public static UserActivityChannel GetForUser(User user);
  }
}
namespace Windows.Devices.Bluetooth.GenericAttributeProfile {
  public enum GattServiceProviderAdvertisementStatus {
    StartedWithoutAllAdvertisementData = 4,
  }
  public sealed class GattServiceProviderAdvertisingParameters {
    IBuffer ServiceData { get; set; }
  }
}
namespace Windows.Devices.Enumeration {
  public enum DevicePairingKinds : uint {
    ProvidePasswordCredential = (uint)16,
  }
  public sealed class DevicePairingRequestedEventArgs {
    void AcceptWithPasswordCredential(PasswordCredential passwordCredential);
  }
}
namespace Windows.Devices.Input {
  public sealed class PenDevice
}
namespace Windows.Devices.PointOfService {
  public sealed class JournalPrinterCapabilities : ICommonPosPrintStationCapabilities {
    bool IsReversePaperFeedByLineSupported { get; }
    bool IsReversePaperFeedByMapModeUnitSupported { get; }
    bool IsReverseVideoSupported { get; }
    bool IsStrikethroughSupported { get; }
    bool IsSubscriptSupported { get; }
    bool IsSuperscriptSupported { get; }
  }
  public sealed class JournalPrintJob : IPosPrinterJob {
    void FeedPaperByLine(int lineCount);
    void FeedPaperByMapModeUnit(int distance);
    void Print(string data, PosPrinterPrintOptions printOptions);
  }
  public sealed class PosPrinter : IClosable {
    IVectorView<uint> SupportedBarcodeSymbologies { get; }
    PosPrinterFontProperty GetFontProperty(string typeface);
  }
  public sealed class PosPrinterFontProperty
  public sealed class PosPrinterPrintOptions
  public sealed class ReceiptPrinterCapabilities : ICommonPosPrintStationCapabilities, ICommonReceiptSlipCapabilities {
    bool IsReversePaperFeedByLineSupported { get; }
    bool IsReversePaperFeedByMapModeUnitSupported { get; }
    bool IsReverseVideoSupported { get; }
    bool IsStrikethroughSupported { get; }
    bool IsSubscriptSupported { get; }
    bool IsSuperscriptSupported { get; }
  }
  public sealed class ReceiptPrintJob : IPosPrinterJob, IReceiptOrSlipJob {
    void FeedPaperByLine(int lineCount);
    void FeedPaperByMapModeUnit(int distance);
    void Print(string data, PosPrinterPrintOptions printOptions);
    void StampPaper();
  }
  public struct SizeUInt32
  public sealed class SlipPrinterCapabilities : ICommonPosPrintStationCapabilities, ICommonReceiptSlipCapabilities {
    bool IsReversePaperFeedByLineSupported { get; }
    bool IsReversePaperFeedByMapModeUnitSupported { get; }
    bool IsReverseVideoSupported { get; }
    bool IsStrikethroughSupported { get; }
    bool IsSubscriptSupported { get; }
    bool IsSuperscriptSupported { get; }
  }
  public sealed class SlipPrintJob : IPosPrinterJob, IReceiptOrSlipJob {
    void FeedPaperByLine(int lineCount);
    void FeedPaperByMapModeUnit(int distance);
    void Print(string data, PosPrinterPrintOptions printOptions);
  }
}
namespace Windows.Globalization {
  public sealed class CurrencyAmount
}
namespace Windows.Graphics.DirectX {
  public enum DirectXPrimitiveTopology
}
namespace Windows.Graphics.Holographic {
  public sealed class HolographicCamera {
    HolographicViewConfiguration ViewConfiguration { get; }
  }
  public sealed class HolographicDisplay {
    HolographicViewConfiguration TryGetViewConfiguration(HolographicViewConfigurationKind kind);
  }
  public sealed class HolographicViewConfiguration
  public enum HolographicViewConfigurationKind
}
namespace Windows.Management.Deployment {
  public enum AddPackageByAppInstallerOptions : uint {
    LimitToExistingPackages = (uint)512,
  }
  public enum DeploymentOptions : uint {
    RetainFilesOnFailure = (uint)2097152,
  }
}
namespace Windows.Media.Devices {
  public sealed class InfraredTorchControl
  public enum InfraredTorchMode
  public sealed class VideoDeviceController : IMediaDeviceController {
    InfraredTorchControl InfraredTorchControl { get; }
  }
}
namespace Windows.Media.Miracast {
  public sealed class MiracastReceiver
  public sealed class MiracastReceiverApplySettingsResult
  public enum MiracastReceiverApplySettingsStatus
  public enum MiracastReceiverAuthorizationMethod
  public sealed class MiracastReceiverConnection : IClosable
  public sealed class MiracastReceiverConnectionCreatedEventArgs
  public sealed class MiracastReceiverCursorImageChannel
  public sealed class MiracastReceiverCursorImageChannelSettings
  public sealed class MiracastReceiverDisconnectedEventArgs
  public enum MiracastReceiverDisconnectReason
  public sealed class MiracastReceiverGameControllerDevice
  public enum MiracastReceiverGameControllerDeviceUsageMode
  public sealed class MiracastReceiverInputDevices
  public sealed class MiracastReceiverKeyboardDevice
  public enum MiracastReceiverListeningStatus
  public sealed class MiracastReceiverMediaSourceCreatedEventArgs
  public sealed class MiracastReceiverSession : IClosable
  public sealed class MiracastReceiverSessionStartResult
  public enum MiracastReceiverSessionStartStatus
  public sealed class MiracastReceiverSettings
  public sealed class MiracastReceiverStatus
  public sealed class MiracastReceiverStreamControl
  public sealed class MiracastReceiverVideoStreamSettings
  public enum MiracastReceiverWiFiStatus
  public sealed class MiracastTransmitter
  public enum MiracastTransmitterAuthorizationStatus
}
namespace Windows.Networking.Connectivity {
  public enum NetworkAuthenticationType {
    Wpa3 = 10,
    Wpa3Sae = 11,
  }
}
namespace Windows.Networking.NetworkOperators {
  public sealed class ESim {
    ESimDiscoverResult Discover();
    ESimDiscoverResult Discover(string serverAddress, string matchingId);
    IAsyncOperation<ESimDiscoverResult> DiscoverAsync();
    IAsyncOperation<ESimDiscoverResult> DiscoverAsync(string serverAddress, string matchingId);
  }
  public sealed class ESimDiscoverEvent
  public sealed class ESimDiscoverResult
  public enum ESimDiscoverResultKind
}
namespace Windows.Networking.PushNotifications {
  public static class PushNotificationChannelManager {
    public static event EventHandler<PushNotificationChannelsRevokedEventArgs> ChannelsRevoked;
  }
  public sealed class PushNotificationChannelsRevokedEventArgs
}
namespace Windows.Perception.People {
  public sealed class EyesPose
  public enum HandJointKind
  public sealed class HandMeshObserver
  public struct HandMeshVertex
  public sealed class HandMeshVertexState
  public sealed class HandPose
  public struct JointPose
  public enum JointPoseAccuracy
}
namespace Windows.Perception.Spatial {
  public struct SpatialRay
}
namespace Windows.Perception.Spatial.Preview {
  public sealed class SpatialGraphInteropFrameOfReferencePreview
  public static class SpatialGraphInteropPreview {
    public static SpatialGraphInteropFrameOfReferencePreview TryCreateFrameOfReference(SpatialCoordinateSystem coordinateSystem);
    public static SpatialGraphInteropFrameOfReferencePreview TryCreateFrameOfReference(SpatialCoordinateSystem coordinateSystem, Vector3 relativePosition);
    public static SpatialGraphInteropFrameOfReferencePreview TryCreateFrameOfReference(SpatialCoordinateSystem coordinateSystem, Vector3 relativePosition, Quaternion relativeOrientation);
  }
}
namespace Windows.Security.Authorization.AppCapabilityAccess {
  public sealed class AppCapability
  public sealed class AppCapabilityAccessChangedEventArgs
  public enum AppCapabilityAccessStatus
}
namespace Windows.Security.DataProtection {
  public enum UserDataAvailability
  public sealed class UserDataAvailabilityStateChangedEventArgs
  public sealed class UserDataBufferUnprotectResult
  public enum UserDataBufferUnprotectStatus
  public sealed class UserDataProtectionManager
  public sealed class UserDataStorageItemProtectionInfo
  public enum UserDataStorageItemProtectionStatus
}
namespace Windows.Storage.AccessCache {
  public static class StorageApplicationPermissions {
    public static StorageItemAccessList GetFutureAccessListForUser(User user);
    public static StorageItemMostRecentlyUsedList GetMostRecentlyUsedListForUser(User user);
  }
}
namespace Windows.Storage.Pickers {
  public sealed class FileOpenPicker {
    User User { get; }
    public static FileOpenPicker CreateForUser(User user);
  }
  public sealed class FileSavePicker {
    User User { get; }
    public static FileSavePicker CreateForUser(User user);
  }
  public sealed class FolderPicker {
    User User { get; }
    public static FolderPicker CreateForUser(User user);
  }
}
namespace Windows.System {
  public sealed class DispatcherQueue {
    bool HasThreadAccess { get; }
  }
  public enum ProcessorArchitecture {
    Arm64 = 12,
    X86OnArm64 = 14,
  }
}
namespace Windows.System.Profile {
  public static class AppApplicability
  public sealed class UnsupportedAppRequirement
  public enum UnsupportedAppRequirementReasons : uint
}
namespace Windows.System.RemoteSystems {
  public sealed class RemoteSystem {
    User User { get; }
    public static RemoteSystemWatcher CreateWatcherForUser(User user);
    public static RemoteSystemWatcher CreateWatcherForUser(User user, IIterable<IRemoteSystemFilter> filters);
  }
  public sealed class RemoteSystemApp {
    string ConnectionToken { get; }
    User User { get; }
  }
  public sealed class RemoteSystemConnectionRequest {
    string ConnectionToken { get; }
    public static RemoteSystemConnectionRequest CreateFromConnectionToken(string connectionToken);
    public static RemoteSystemConnectionRequest CreateFromConnectionTokenForUser(User user, string connectionToken);
  }
  public sealed class RemoteSystemWatcher {
    User User { get; }
  }
}
namespace Windows.UI {
  public sealed class UIContentRoot
  public sealed class UIContext
}
namespace Windows.UI.Composition {
  public enum CompositionBitmapInterpolationMode {
    MagLinearMinLinearMipLinear = 2,
    MagLinearMinLinearMipNearest = 3,
    MagLinearMinNearestMipLinear = 4,
    MagLinearMinNearestMipNearest = 5,
    MagNearestMinLinearMipLinear = 6,
    MagNearestMinLinearMipNearest = 7,
    MagNearestMinNearestMipLinear = 8,
    MagNearestMinNearestMipNearest = 9,
  }
  public sealed class CompositionGraphicsDevice : CompositionObject {
    CompositionMipmapSurface CreateMipmapSurface(SizeInt32 sizePixels, DirectXPixelFormat pixelFormat, DirectXAlphaMode alphaMode);
  }
  public sealed class CompositionMipmapSurface : CompositionObject, ICompositionSurface
  public sealed class CompositionProjectedShadow : CompositionObject
  public sealed class CompositionProjectedShadowCaster : CompositionObject
  public sealed class CompositionProjectedShadowCasterCollection : CompositionObject, IIterable<CompositionProjectedShadowCaster>
  public enum CompositionProjectedShadowDrawOrder
  public sealed class CompositionProjectedShadowReceiver : CompositionObject
  public sealed class CompositionProjectedShadowReceiverUnorderedCollection : CompositionObject, IIterable<CompositionProjectedShadowReceiver>
  public sealed class CompositionRadialGradientBrush : CompositionGradientBrush
  public sealed class CompositionSurfaceBrush : CompositionBrush {
    bool SnapToPixels { get; set; }
  }
  public class CompositionTransform : CompositionObject
  public sealed class CompositionVisualSurface : CompositionObject, ICompositionSurface
  public sealed class Compositor : IClosable {
    CompositionProjectedShadow CreateProjectedShadow();
    CompositionProjectedShadowCaster CreateProjectedShadowCaster();
    CompositionProjectedShadowReceiver CreateProjectedShadowReceiver();
    CompositionRadialGradientBrush CreateRadialGradientBrush();
    CompositionVisualSurface CreateVisualSurface();
  }
  public interface ICompositorPartner_ProjectedShadow
  public interface IVisualElement
}
namespace Windows.UI.Composition.Interactions {
  public enum InteractionBindingAxisModes : uint
  public sealed class InteractionTracker : CompositionObject {
    public static InteractionBindingAxisModes GetBindingMode(InteractionTracker boundTracker1, InteractionTracker boundTracker2);
    public static void SetBindingMode(InteractionTracker boundTracker1, InteractionTracker boundTracker2, InteractionBindingAxisModes axisMode);
  }
  public sealed class InteractionTrackerCustomAnimationStateEnteredArgs {
    bool IsFromBinding { get; }
  }
  public sealed class InteractionTrackerIdleStateEnteredArgs {
    bool IsFromBinding { get; }
  }
  public sealed class InteractionTrackerInertiaStateEnteredArgs {
    bool IsFromBinding { get; }
  }
  public sealed class InteractionTrackerInteractingStateEnteredArgs {
    bool IsFromBinding { get; }
  }
  public class VisualInteractionSource : CompositionObject, ICompositionInteractionSource {
    public static VisualInteractionSource CreateFromIVisualElement(IVisualElement source);
  }
}
namespace Windows.UI.Composition.Scenes {
  public enum SceneAlphaMode
  public enum SceneAttributeSemantic
  public sealed class SceneBoundingBox : SceneObject
  public class SceneComponent : SceneObject
  public sealed class SceneComponentCollection : SceneObject, IIterable<SceneComponent>, IVector<SceneComponent>
  public enum SceneComponentType
  public class SceneMaterial : SceneObject
  public class SceneMaterialInput : SceneObject
  public sealed class SceneMesh : SceneObject
  public sealed class SceneMeshMaterialAttributeMap : SceneObject, IIterable<IKeyValuePair<string, SceneAttributeSemantic>>, IMap<string, SceneAttributeSemantic>
  public sealed class SceneMeshRendererComponent : SceneRendererComponent
  public sealed class SceneMetallicRoughnessMaterial : ScenePbrMaterial
  public sealed class SceneModelTransform : CompositionTransform
  public sealed class SceneNode : SceneObject
  public sealed class SceneNodeCollection : SceneObject, IIterable<SceneNode>, IVector<SceneNode>
  public class SceneObject : CompositionObject
  public class ScenePbrMaterial : SceneMaterial
  public class SceneRendererComponent : SceneComponent
  public sealed class SceneSurfaceMaterialInput : SceneMaterialInput
  public sealed class SceneVisual : ContainerVisual
  public enum SceneWrappingMode
}
namespace Windows.UI.Core {
  public sealed class CoreWindow : ICorePointerRedirector, ICoreWindow {
    UIContext UIContext { get; }
  }
}
namespace Windows.UI.Core.Preview {
  public sealed class CoreAppWindowPreview
}
namespace Windows.UI.Input {
  public class AttachableInputObject : IClosable
  public enum GazeInputAccessStatus
  public sealed class InputActivationListener : AttachableInputObject
  public sealed class InputActivationListenerActivationChangedEventArgs
  public enum InputActivationState
}
namespace Windows.UI.Input.Preview {
  public static class InputActivationListenerPreview
}
namespace Windows.UI.Input.Spatial {
  public sealed class SpatialInteractionManager {
    public static bool IsSourceKindSupported(SpatialInteractionSourceKind kind);
  }
  public sealed class SpatialInteractionSource {
    HandMeshObserver TryCreateHandMeshObserver();
    IAsyncOperation<HandMeshObserver> TryCreateHandMeshObserverAsync();
  }
  public sealed class SpatialInteractionSourceState {
    HandPose TryGetHandPose();
  }
  public sealed class SpatialPointerPose {
    EyesPose Eyes { get; }
    bool IsHeadCapturedBySystem { get; }
  }
}
namespace Windows.UI.Notifications {
  public sealed class ToastActivatedEventArgs {
    ValueSet UserInput { get; }
  }
  public sealed class ToastNotification {
    bool ExpiresOnReboot { get; set; }
  }
}
namespace Windows.UI.ViewManagement {
  public sealed class ApplicationView {
    string PersistedStateId { get; set; }
    UIContext UIContext { get; }
    WindowingEnvironment WindowingEnvironment { get; }
    public static void ClearAllPersistedState();
    public static void ClearPersistedState(string key);
    IVectorView<DisplayRegion> GetDisplayRegions();
  }
  public sealed class InputPane {
    public static InputPane GetForUIContext(UIContext context);
  }
  public sealed class UISettings {
    bool AutoHideScrollBars { get; }
    event TypedEventHandler<UISettings, UISettingsAutoHideScrollBarsChangedEventArgs> AutoHideScrollBarsChanged;
  }
  public sealed class UISettingsAutoHideScrollBarsChangedEventArgs
}
namespace Windows.UI.ViewManagement.Core {
  public sealed class CoreInputView {
    public static CoreInputView GetForUIContext(UIContext context);
  }
}
namespace Windows.UI.WindowManagement {
  public sealed class AppWindow
  public sealed class AppWindowChangedEventArgs
  public sealed class AppWindowClosedEventArgs
  public enum AppWindowClosedReason
  public sealed class AppWindowCloseRequestedEventArgs
  public sealed class AppWindowFrame
  public enum AppWindowFrameStyle
  public sealed class AppWindowPlacement
  public class AppWindowPresentationConfiguration
  public enum AppWindowPresentationKind
  public sealed class AppWindowPresenter
  public sealed class AppWindowTitleBar
  public sealed class AppWindowTitleBarOcclusion
  public enum AppWindowTitleBarVisibility
  public sealed class CompactOverlayPresentationConfiguration : AppWindowPresentationConfiguration
  public sealed class DefaultPresentationConfiguration : AppWindowPresentationConfiguration
  public sealed class DisplayRegion
  public sealed class FullScreenPresentationConfiguration : AppWindowPresentationConfiguration
  public sealed class WindowingEnvironment
  public sealed class WindowingEnvironmentAddedEventArgs
  public sealed class WindowingEnvironmentChangedEventArgs
  public enum WindowingEnvironmentKind
  public sealed class WindowingEnvironmentRemovedEventArgs
}
namespace Windows.UI.WindowManagement.Preview {
  public sealed class WindowManagementPreview
}
namespace Windows.UI.Xaml {
  public class UIElement : DependencyObject, IAnimationObject, IVisualElement {
    Vector3 ActualOffset { get; }
    Vector2 ActualSize { get; }
    Shadow Shadow { get; set; }
    public static DependencyProperty ShadowProperty { get; }
    UIContext UIContext { get; }
    XamlRoot XamlRoot { get; set; }
  }
  public class UIElementWeakCollection : IIterable<UIElement>, IVector<UIElement>
  public sealed class Window {
    UIContext UIContext { get; }
  }
  public sealed class XamlRoot
  public sealed class XamlRootChangedEventArgs
}
namespace Windows.UI.Xaml.Controls {
  public sealed class DatePickerFlyoutPresenter : Control {
    bool IsDefaultShadowEnabled { get; set; }
    public static DependencyProperty IsDefaultShadowEnabledProperty { get; }
  }
  public class FlyoutPresenter : ContentControl {
    bool IsDefaultShadowEnabled { get; set; }
    public static DependencyProperty IsDefaultShadowEnabledProperty { get; }
  }
  public class InkToolbar : Control {
    InkPresenter TargetInkPresenter { get; set; }
    public static DependencyProperty TargetInkPresenterProperty { get; }
  }
  public class MenuFlyoutPresenter : ItemsControl {
    bool IsDefaultShadowEnabled { get; set; }
    public static DependencyProperty IsDefaultShadowEnabledProperty { get; }
  }
  public sealed class TimePickerFlyoutPresenter : Control {
    bool IsDefaultShadowEnabled { get; set; }
    public static DependencyProperty IsDefaultShadowEnabledProperty { get; }
  }
  public class TwoPaneView : Control
  public enum TwoPaneViewMode
  public enum TwoPaneViewPriority
  public enum TwoPaneViewTallModeConfiguration
  public enum TwoPaneViewWideModeConfiguration
}
namespace Windows.UI.Xaml.Controls.Maps {
  public sealed class MapControl : Control {
    bool CanTiltDown { get; }
    public static DependencyProperty CanTiltDownProperty { get; }
    bool CanTiltUp { get; }
    public static DependencyProperty CanTiltUpProperty { get; }
    bool CanZoomIn { get; }
    public static DependencyProperty CanZoomInProperty { get; }
    bool CanZoomOut { get; }
    public static DependencyProperty CanZoomOutProperty { get; }
  }
  public enum MapLoadingStatus {
    DownloadedMapsManagerUnavailable = 3,
  }
}
namespace Windows.UI.Xaml.Controls.Primitives {
  public sealed class AppBarTemplateSettings : DependencyObject {
    double NegativeCompactVerticalDelta { get; }
    double NegativeHiddenVerticalDelta { get; }
    double NegativeMinimalVerticalDelta { get; }
  }
  public sealed class CommandBarTemplateSettings : DependencyObject {
    double OverflowContentCompactYTranslation { get; }
    double OverflowContentHiddenYTranslation { get; }
    double OverflowContentMinimalYTranslation { get; }
  }
  public class FlyoutBase : DependencyObject {
    bool IsConstrainedToRootBounds { get; }
    bool ShouldConstrainToRootBounds { get; set; }
    public static DependencyProperty ShouldConstrainToRootBoundsProperty { get; }
    XamlRoot XamlRoot { get; set; }
  }
  public sealed class Popup : FrameworkElement {
    bool IsConstrainedToRootBounds { get; }
    bool ShouldConstrainToRootBounds { get; set; }
    public static DependencyProperty ShouldConstrainToRootBoundsProperty { get; }
  }
}
namespace Windows.UI.Xaml.Core.Direct {
  public enum XamlPropertyIndex {
    AppBarTemplateSettings_NegativeCompactVerticalDelta = 2367,
    AppBarTemplateSettings_NegativeHiddenVerticalDelta = 2368,
    AppBarTemplateSettings_NegativeMinimalVerticalDelta = 2369,
    CommandBarTemplateSettings_OverflowContentCompactYTranslation = 2384,
    CommandBarTemplateSettings_OverflowContentHiddenYTranslation = 2385,
    CommandBarTemplateSettings_OverflowContentMinimalYTranslation = 2386,
    FlyoutBase_ShouldConstrainToRootBounds = 2378,
    FlyoutPresenter_IsDefaultShadowEnabled = 2380,
    MenuFlyoutPresenter_IsDefaultShadowEnabled = 2381,
    Popup_ShouldConstrainToRootBounds = 2379,
    ThemeShadow_Receivers = 2279,
    UIElement_ActualOffset = 2382,
    UIElement_ActualSize = 2383,
    UIElement_Shadow = 2130,
  }
  public enum XamlTypeIndex {
    ThemeShadow = 964,
  }
}
namespace Windows.UI.Xaml.Documents {
  public class TextElement : DependencyObject {
    XamlRoot XamlRoot { get; set; }
  }
}
namespace Windows.UI.Xaml.Hosting {
  public sealed class ElementCompositionPreview {
    public static UIElement GetAppWindowContent(AppWindow appWindow);
    public static void SetAppWindowContent(AppWindow appWindow, UIElement xamlContent);
  }
}
namespace Windows.UI.Xaml.Input {
  public sealed class FocusManager {
    public static object GetFocusedElement(XamlRoot xamlRoot);
  }
  public class StandardUICommand : XamlUICommand {
    StandardUICommandKind Kind { get; set; }
  }
}
namespace Windows.UI.Xaml.Media {
  public class AcrylicBrush : XamlCompositionBrushBase {
    IReference<double> TintLuminosityOpacity { get; set; }
    public static DependencyProperty TintLuminosityOpacityProperty { get; }
  }
  public class Shadow : DependencyObject
  public class ThemeShadow : Shadow
  public sealed class VisualTreeHelper {
    public static IVectorView<Popup> GetOpenPopupsForXamlRoot(XamlRoot xamlRoot);
  }
}
namespace Windows.UI.Xaml.Media.Animation {
  public class GravityConnectedAnimationConfiguration : ConnectedAnimationConfiguration {
    bool IsShadowEnabled { get; set; }
  }
}
namespace Windows.Web.Http {
  public sealed class HttpClient : IClosable, IStringable {
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryDeleteAsync(Uri uri);
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryGetAsync(Uri uri);
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryGetAsync(Uri uri, HttpCompletionOption completionOption);
    IAsyncOperationWithProgress<HttpGetBufferResult, HttpProgress> TryGetBufferAsync(Uri uri);
    IAsyncOperationWithProgress<HttpGetInputStreamResult, HttpProgress> TryGetInputStreamAsync(Uri uri);
    IAsyncOperationWithProgress<HttpGetStringResult, HttpProgress> TryGetStringAsync(Uri uri);
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryPostAsync(Uri uri, IHttpContent content);
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryPutAsync(Uri uri, IHttpContent content);
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TrySendRequestAsync(HttpRequestMessage request);
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TrySendRequestAsync(HttpRequestMessage request, HttpCompletionOption completionOption);
  }
  public sealed class HttpGetBufferResult : IClosable, IStringable
  public sealed class HttpGetInputStreamResult : IClosable, IStringable
  public sealed class HttpGetStringResult : IClosable, IStringable
  public sealed class HttpRequestResult : IClosable, IStringable
}
namespace Windows.Web.Http.Filters {
  public sealed class HttpBaseProtocolFilter : IClosable, IHttpFilter {
    User User { get; }
    public static HttpBaseProtocolFilter CreateForUser(User user);
  }
}

The post Windows 10 SDK Preview Build 18327 available now! appeared first on Windows Developer Blog.

Adding caching to Azure Pipelines


For a long while, Azure Pipelines users have been asking to improve performance on the hosted build agents by adding caching for common scenarios like package restore. The issue came up in a recent popular Hacker News item, so we wanted to share an update.

Pipeline Caching is starting development now. You can see the design in this PR. Near the end of March, we’ll be releasing restore and save cache tasks that allow you to cache any file or set of files to a cache key of your choice. If you’re building some dependencies over and over or restoring a lot of packages, these tasks can add immediate time savings to your pipelines.
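To make "a cache key of your choice" concrete: a common pattern is to derive the key from a hash of a dependency manifest, so the cache is invalidated exactly when dependencies change. Here is a minimal, tool-agnostic sketch of that idea; the file name and cache location are illustrative and not part of the announced tasks.

```shell
# Sketch: derive a cache key from a dependency manifest, so cached output
# (node_modules or similar) is reused only while the manifest is unchanged.
set -e

MANIFEST=package-lock.json                 # placeholder manifest file
CACHE_ROOT=${CACHE_ROOT:-/tmp/pipeline-cache}

# Create a sample manifest for demonstration purposes.
printf '{"lodash":"4.17.11"}\n' > "$MANIFEST"

# The key is a content hash: same manifest => same key => cache hit.
KEY=$(sha256sum "$MANIFEST" | cut -d' ' -f1)
CACHE_DIR="$CACHE_ROOT/$KEY"

if [ -d "$CACHE_DIR" ]; then
  echo "cache hit: $KEY"
else
  echo "cache miss: $KEY"
  mkdir -p "$CACHE_DIR"                    # "save cache": populate after a real install
fi
```

The same derivation works for pip's requirements.txt, NuGet's packages.lock.json, and so on, which is why per-ecosystem defaults can be layered on top of generic restore/save tasks.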

Continuing in Q2, we’ll improve those tasks after we see how you use them in your pipelines. We’ll also be enabling caching by default in the most popular ecosystems (think NuGet, pip, npm) as we learn more about the performance of those tools.

Please share your feedback on the PR; we’ll be reading and evolving the design as we go. Also feel free to get in touch with me and Mitch Denny, the feature owner for Pipeline Caching. We’re eager to hear about your ideas for using caching in your pipelines.

Azure Cost Management now generally available for enterprise agreements and more!


As enterprises accelerate cloud adoption, it is becoming increasingly important to manage cloud costs across the organization. Last September, we announced the public preview of a comprehensive native cost management solution for enterprise customers. We are now excited to announce the general availability (GA) of the Azure Cost Management experience, which helps organizations visualize, manage, and optimize costs across Azure.

In addition, we are excited to announce the public preview for web direct Pay-As-You-Go customers and Azure Government cloud.

With the addition of Azure Cost Management, customers now have an always-on, low-latency solution to understand and visualize costs, with the following features available in Cost Management:

Cost analysis

This feature allows you to track costs over the course of the month and offers you a variety of ways to analyze your data. To learn more about how to use cost analysis, please visit our documentation, “Quickstart: Explore and analyze costs with Cost analysis.”

Cost analysis dashboard in Azure Cost Management

Budgets

Use budgets to proactively manage costs and drive accountability within your organization. To learn more about using Azure budgets please visit our documentation, “Tutorial: Create and manage Azure budgets.”

Budgets in Azure Cost Management

Budget graph in Azure Cost Management

Exports

Export all your cost data to an Azure storage account using our new exports feature. You can use this data in external systems and combine it with your own data to maximize your cost management capabilities. To learn more about using Azure exports please visit our documentation, “Tutorial: Create and manage exported data.”

New Azure APIs

As a part of this release we are also making the APIs mentioned below available for you to build your own cost management solutions. To learn more about developing on top of our new cost management functionality, please visit the Azure REST API documentation links below.

  • Usage Query – Develop advanced API query calls to learn the most about your organization’s usage and cost patterns.
  • Budgets – Create and view your budgets in an automated fashion.
  • Exports – Automate data export configuration.
  • Usage details by Management Group – Use this API to analyze your organization’s usage across multiple subscriptions.
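As a sketch of what a Usage Query call can look like, the request below asks for month-to-date cost, aggregated daily and grouped by service, sent as a POST to `{scope}/providers/Microsoft.CostManagement/query`. The scope, time frame, and grouping dimension are illustrative choices; check the REST reference linked above for the full request contract.

```json
{
  "type": "Usage",
  "timeframe": "MonthToDate",
  "dataset": {
    "granularity": "Daily",
    "aggregation": {
      "totalCost": { "name": "PreTaxCost", "function": "Sum" }
    },
    "grouping": [
      { "type": "Dimension", "name": "ServiceName" }
    ]
  }
}
```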

Alerts (in preview)

View and manage all your alerts in one place with the new alerts preview feature. In this release you can view budget alerts, monetary commitment alerts, and department spending quota alerts. You can also view active and dismissed alerts.

Cost management alerts in the Azure portal

Getting started

Get started now on this end-to-end cost management and optimization solution that enables you to get the most value for every cloud dollar spent. Please visit the Azure Cost Management documentation page for tutorials and details on getting started.

What’s coming next?

We will continue to iterate on Cost Management in the coming months, so you can enjoy a more unified user experience with features like the ability to save and schedule reports; additional capabilities in cost analysis, budgets, alerts, and exports; as well as showback.

Partners will also soon be able to leverage the benefits of cost management through our support for the Cloud Solution Provider (CSP) program. With Azure Cost Management, Microsoft is committed to continuing its investment in supporting a multi-cloud environment, including Azure and AWS. Public preview for AWS is currently targeted for Q2 of this calendar year. We plan to continue enhancing this with support for other clouds in the near future.

Are you ready for the best part? Azure Cost Management is available for free to all customers and partners to manage Azure costs.

The Cloudyn portal will continue to be available to customers while we integrate all relevant functionality into native Azure Cost Management.

Follow us on Twitter @AzureCostMgmt for exciting cost management updates.

Configure resource group control for your Azure DevTest Lab


As a lab owner, you now have the option to configure all your lab virtual machines (VMs) to be created in a single resource group. This helps prevent you from reaching resource group limits on your Microsoft Azure subscription. The feature also lets you consolidate all your lab resources within a single resource group, which simplifies tracking those resources and applying policies to manage them at the resource group level. This article will discuss improving governance of your development and test environments by using Azure policies that you can apply at the resource group level.

This feature allows you to use a script to specify either a new or an existing resource group within your Azure subscription for all your lab VMs to be created in. It is important to note that we currently support this feature only through an API; however, we will soon be adding an in-product experience for you to configure this setting for your lab.

Now let’s walk through the options you have as a lab owner while using this API:

  • You can choose the lab’s resource group for all VMs to be created in going forward.
  • You can choose an existing resource group other than the lab's resource group for all VMs to be created in going forward.
  • You can enter a new resource group name for all VMs to be created in going forward.
  • You can also continue with the existing behavior.

This setting will apply to new VMs created in the lab. This means older VMs in your lab that are created in their own resource groups will continue to remain unaffected. However, you can migrate these VMs from their individual resource groups to the common resource group you selected initially, allowing all your lab VMs to be in one common resource group going forward. You can learn more about migrating resources across resource groups by visiting our documentation, “Move resources to new resource group or subscription.” ARM environments created in your lab will continue to remain in their own resource groups and will not be affected by any option you select while working with this API.
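Since the post points at a script rather than showing the call, here is a sketch of the shape such an update can take: a PATCH on the lab resource (`.../providers/Microsoft.DevTestLab/labs/{labName}`) whose body sets the target resource group. The property name `vmCreationResourceGroupId` and the placeholders are assumptions to be verified against the DevTest Labs REST reference and the example script in the documentation.

```json
{
  "properties": {
    "vmCreationResourceGroupId": "/subscriptions/{subscriptionId}/resourceGroups/{commonVmResourceGroup}"
  }
}
```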

You can also learn more about how to use this API along with an example script by visiting our documentation, “About Azure DevTest Labs.” We hope you find this feature useful!

Got an idea to make it work better for you? Submit your feedback and ideas, or vote for others at Azure DevTest Labs UserVoice forum. Have a question? Check out the answers or ask a new one at our MSDN forum.

Reserved instances now applicable to classic VMs, cloud services, and Dev/Test subscriptions


Expanding reserved instances discounts to classic virtual machines, Azure Cloud Services, and Dev/Test subscriptions

Today, we are excited to announce two new Azure Reserved VM Instance (RI) features that provide our customers with additional savings and purchase controls.

Since launch, we have continued to add multiple features such as instance size flexibility, RIs for US Government regions, purchase recommendations, and RIs in the Cloud Solution Provider (CSP) channel. We have also extended the capability to provide reservation discounts on SQL Databases and Cosmos DB.

Features that we are launching today:

1. Classic VMs and Cloud Services users can now benefit from the RI discounts

RIs with the instance size flexibility option enabled will now apply the discount to both classic VMs and cloud services. For cloud services, the reservation discount applies only to the compute cost. When the reservation discount is applied to cloud services, the usage charges are split into compute charges (Linux meter) and cloud services charges (cloud services management meter). Learn how the reservation discount applies to Cloud Services.

2. Enterprise Dev/Test and Pay-As-You-Go Dev/Test subscriptions can now benefit from the RI discounts

Newly purchased RIs or existing RIs can now be applied to your Dev/Test subscriptions. VM usage on Dev/Test subscriptions will be automatically eligible for the RI discount and all existing reservations with shared scope will be updated to apply discounts to Dev/Test subscriptions.

Next steps

Visual Studio Code January 2019

Using VS Code for C++ development with containers


This post builds on using multi-stage containers for C++ development. That post showed how to use a single Dockerfile to describe a build stage and a deployment stage, resulting in a container optimized for deployment. It did not show you how to use those containers with your development environment. Here we will show how to use them with VS Code. The source for this article is the same as that of the previous article: the findfaces GitHub repo.

Creating a container for use with VS Code

VS Code has the capability to target a remote system for debugging. Couple that with a custom build task for compiling in your container and you will have an interactive containerized C++ development environment.

We’ll need to change our container definition a bit to enable using it with VS Code. These instructions are based on some base container definitions that David Ducatel has provided in this GitHub repo. What we’re doing here is taking those techniques and applying them to our own container definition. Let’s look at another Dockerfile for use with VS Code, Dockerfile.vs.

FROM findfaces/build

LABEL description="Container for use with VS"

RUN apk update && apk add --no-cache \
    gdb openssh rsync zip

RUN echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config && \
    echo 'PermitEmptyPasswords yes' >> /etc/ssh/sshd_config && \
    echo 'PasswordAuthentication yes' >> /etc/ssh/sshd_config && \
    ssh-keygen -A

EXPOSE 22 
CMD ["/usr/sbin/sshd", "-D"]

In the FROM statement we're basing this definition on the local image we created earlier in our multi-stage build. That container already has all our basic development prerequisites, but for VS Code usage we need a few more things enumerated above. Notably, we need SSH for communication with VS Code for debugging, which is configured in the second RUN command. As we are enabling root login, this container definition is not appropriate for anything other than local development. The entry point for this container is sshd, specified in the CMD line. Building this container is simple.

docker build -t findfaces/vs -f Dockerfile.vs .

We need to specify a bit more to run a container based on this image so VS Code can debug processes in it.

docker run -d -p 12345:22 --security-opt seccomp:unconfined -v c:/source/repos/findfaces/src:/source --name findfacesvscode findfaces/vs

One of the new parameters we haven't covered before is --security-opt. As debugging requires running privileged operations, we're running the container in unconfined mode. The other new parameter we're using is -v, which creates a bind mount that maps our local file system into the container. This is so that when we edit files on our host, those changes are available in the container without having to rebuild the image or copy them into the running container. If you look at Docker's documentation, you'll find that volumes are usually preferred over bind mounts today. However, sharing source code with a container is considered a good use of a bind mount. Note that our build container copied our src directory to /src. Therefore, in this container definition, which we will use interactively, we are mapping our local src directory to /source so it doesn't conflict with what is already present in the build container.

Building C++ in a container with VS Code

First, let’s configure our build task. This task has already been created in tasks.json under the .vscode folder in the repo we’re using with this post. To configure it in a new project, press Ctrl+Shift+B and follow the prompts until you get to “other”. Our configured build task appears as follows.

{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "build",
            "type": "shell",
            "command": "ssh",
            "args": [
                "root@localhost",
                "-p",
                "12345",
                "/source/build.sh"
            ],
            "problemMatcher": [
                "$gcc"
            ]
        }
    ]
}

The “label” value tells VS Code this is our build task, and the “type” value indicates that we’re running a command in the shell. The command here is ssh (which is available on Windows 10). The arguments pass the parameters to ssh to log in to the container on the correct port (the one we mapped with docker run) and run a script. The content of that script reads as follows.

cd /source/output && 
cmake .. -DCMAKE_BUILD_TYPE=Debug -DCMAKE_TOOLCHAIN_FILE=/tmp/vcpkg/scripts/buildsystems/vcpkg.cmake -DVCPKG_TARGET_TRIPLET=x64-linux-musl && 
make

You can see that this script just invokes CMake in our output directory, then builds our project. The trick is that we are invoking this via ssh in our container. After this is set up, you can run a build at any time from within VS Code, as long as your container is running.
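As written, the script assumes the /source/output directory already exists. A slightly more defensive variant (a hypothetical sketch, not the script from the repo) creates the out-of-source build directory first and stops on the first failure. It defaults to a relative "output" directory so you can try it anywhere; inside the container you would pass /source/output.

```shell
#!/bin/sh
# Hypothetical defensive variant of build.sh: stop on the first failure
# and create the out-of-source build directory if it is missing.
set -e
BUILD_DIR="${1:-output}"
mkdir -p "$BUILD_DIR"
cd "$BUILD_DIR"
echo "configuring and building in $BUILD_DIR"
# The actual configure-and-build steps, as in the script above:
# cmake .. -DCMAKE_BUILD_TYPE=Debug \
#   -DCMAKE_TOOLCHAIN_FILE=/tmp/vcpkg/scripts/buildsystems/vcpkg.cmake \
#   -DVCPKG_TARGET_TRIPLET=x64-linux-musl
# make
```

Because the cd and mkdir are guarded by set -e, a typo in the directory argument fails immediately instead of configuring CMake in the wrong place.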

Debugging C++ in a container with VS Code

To bring up the Debug view click the Debug icon in the Activity Bar. A launch.json has already been created in the .vscode folder of the repo for this post. To create one in a new project, select the configure icon and follow the prompts to choose any configuration. The configuration we need is not one of the default options, so once you have your launch.json select Add Configuration and choose C/C++: (gdb) Pipe Launch. The Pipe Launch configuration starts a tunnel, usually SSH, to connect to a remote machine and pipe debug commands through.

You’ll want to modify the following options in the generated Pipe Launch configuration.

            "program": "/source/output/findfaces",
            "args": [],
            "stopAtEntry": true,
            "cwd": "/source/output",

The above parameters in the configuration specify the program to launch on the remote system, any arguments, whether to stop at entry, and what the current working directory on the remote is. The next block shows how to start the pipe.

            "pipeTransport": {
                "debuggerPath": "/usr/bin/gdb",
                "pipeProgram": "C:/Windows/system32/OpenSSH/ssh.exe",
                "pipeArgs": [
                    "root@localhost",
                    "-p",
                    "12345"
                ],
                "pipeCwd": ""
            },

You’ll note here that “pipeProgram” is not just “ssh”; the full path to the executable is required. The path in the example above is the full path to ssh on Windows; it will be different on other systems. The pipe arguments are just the parameters to pass to ssh to start the remote connection. The debugger path option is the default and is correct for this example.
We need to add one new parameter at the end of the configuration.

            "sourceFileMap": {
                "/source": "c:/source/repos/findfaces/src"
            }

This option tells the debugger to map /source on the remote to our local path so that our sources are properly found.

Hit F5 to start debugging in the container. The provided launch.json is configured to break on entry so you can immediately see it is working.

IntelliSense for C++ with a container

There are a couple of ways you can set up IntelliSense for use with your C++ code intended for use in a container. Throughout this series of posts we have been using vcpkg to get our libraries. If you use vcpkg on your host system, and have acquired the same libraries using it, then your IntelliSense should work for your libraries.

System headers are another matter. If you are working on Mac or Linux, they may be close enough that you are not concerned with configuring this. If you are on Windows, or you want your IntelliSense to exactly match your target system, you will need to copy the headers to your local machine. While your container is running, you can use scp (which is available on Windows 10) to accomplish this. Create a directory where you want to save the headers, navigate there in your shell, and run the following command.

scp -r -P 12345 root@localhost:/usr/include .

To get the remote vcpkg headers you can similarly do the following.

scp -r -P 12345 root@localhost:/tmp/vcpkg/installed/x64-linux-musl/include .

As an alternative to scp, you can also use Docker directly to copy the headers. For this command the container need not be running.

docker cp -L findfacesvscode:/usr/include .

Now you can configure your C++ IntelliSense to use those locations.
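For example, a c_cpp_properties.json along these lines would point IntelliSense at the copied headers. This is a sketch under assumptions: the two local header directories are hypothetical destinations for the scp commands above, and the mode/standard values should match your container's toolchain.

```json
{
    "configurations": [
        {
            "name": "Linux-container",
            "includePath": [
                "${workspaceFolder}/**",
                "c:/source/repos/findfaces/headers/include",
                "c:/source/repos/findfaces/headers/vcpkg/include"
            ],
            "intelliSenseMode": "gcc-x64",
            "cppStandard": "c++17"
        }
    ],
    "version": 4
}
```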

Keeping up with your containers

When you are done with your development, simply stop the container.

docker stop findfacesvscode

The next time you need it, spin it back up.

docker start findfacesvscode

And of course, you need to rerun your multi-stage build to populate your runtime container with your changes.

docker build -t findfaces/run .

Remember that in this example we have our output directory configured under our source directory on the host. If you don’t delete it, that directory will be copied into the build container, which you don’t want. So delete the contents of the output directory before rebuilding your containers (or adjust your scripts to avoid the issue).
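That cleanup step can be wrapped in a small helper. This is a hypothetical script, not part of the repo; the default src/output path matches this example's layout and should be adjusted to yours.

```shell
#!/bin/sh
# Hypothetical helper: empty the stale build output directory on the host
# before rebuilding the runtime container, so old artifacts are not
# copied into the build context.
set -e

clean_output() {
    # $1: output directory to empty (defaults to this example's src/output)
    out_dir="${1:-src/output}"
    if [ -d "$out_dir" ]; then
        # ${out_dir:?} guards against expanding to an empty path
        rm -rf "${out_dir:?}"/*
        echo "cleaned $out_dir"
    else
        echo "nothing to clean at $out_dir"
    fi
}

clean_output "$@"
```

Run it from the repository root before rebuilding with docker build.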

What next

We plan to continue our exploration of containers in future posts. Looking forward, we will introduce a helper container that provides a proxy for our service, and we will show how to deploy our containers to Azure. We will also revisit this application using Windows containers in the future.

Give us feedback

We’d love to hear from you about what you’d like to see covered in the future about containers. We’re excited to see more people in the C++ community start producing their own content about using C++ with containers. Despite the huge potential for C++ in the cloud with containers, there is very little material out there today.

If you could spare a few minutes to take our C++ cloud and container development survey, it will help us focus on topics that are important to you on the blog and in the form of product improvements.

As always, we welcome your feedback. We can be reached via the comments below or via email (visualcpp@microsoft.com). If you encounter other problems or have a suggestion for Visual Studio please let us know through Help > Send Feedback > Report A Problem / Provide a Suggestion in the product, or via Developer Community. You can also find us on Twitter (@VisualC).

Changes to the web and JSON editor APIs in Visual Studio 2019


In Visual Studio 2019 Preview 2, the Web Tools team made some changes to improve extensibility for extension developers. To standardize interfaces, the CSS, HTML, JSON, and CSHTML editors renamed their assemblies per the following table:

Old assembly → New assembly
Microsoft.CSS.Core → Microsoft.WebTools.Languages.Css
Microsoft.CSS.Editor → Microsoft.WebTools.Languages.Css.Editor
Microsoft.Html.Core → Microsoft.WebTools.Languages.Html
Microsoft.Html.Editor → Microsoft.WebTools.Languages.Html.Editor
Microsoft.VisualStudio.Html.Package → Microsoft.WebTools.Languages.Html.VS
Microsoft.JSON.Core → Microsoft.WebTools.Languages.Json
Microsoft.JSON.Editor → Microsoft.WebTools.Languages.Json.Editor
Microsoft.VisualStudio.JSON.Package → Microsoft.WebTools.Languages.Json.VS
Microsoft.VisualStudio.Web.Extensions → Microsoft.WebTools.Languages.Extensions
Microsoft.Web.Core → Microsoft.WebTools.Languages.Shared
Microsoft.Web.Editor → Microsoft.WebTools.Languages.Shared.Editor

To avoid potential parsing issues, the behavior of the JSON parse tree also changed: when you call JsonParserService.GetTreeAsync, you now get a snapshot of the JSON parse tree. As an extension developer, you can request and maintain these snapshots of the parse tree.

