
Accelerating blockchain adoption in the enterprise



The mission for our blockchain investments has remained steadfast since the beginning, with the goal of democratizing complex technologies and creating connections across organizational boundaries to solve shared problems in a trusted manner. With this in mind, our roadmap has been focused on building an open and scalable platform to advance the adoption of blockchain in the enterprise.

We have seen this approach resonate with our customers time and time again. A real-world example powered by Azure blockchain technology is GE Aviation’s next-generation blockchain solution that tracks the genealogy of engine parts to improve productivity and safety for airlines. 

"Microsoft has taken blockchain from an art and transformed it into a science by fundamentally improving each aspect of the software stack. We’ve benefited from the innovation of no-code logic apps for data ingestion, off-chain storage with Azure SQL Database and Azure Cosmos DB, and analytics visualization via Power BI. The rich integration of these Azure services with Azure Blockchain Service and Azure Blockchain Workbench have enabled us to get our solutions into the hands of our customers much faster with a lot less complexity." - David Havera, Blockchain Leader, GE Aviation Digital Group

With this in mind, we're announcing a wave of innovation designed to simplify and accelerate blockchain adoption across the whole stack.

Accelerating blockchain adoption with Microsoft Azure Blockchain Tokens

Since launching Azure Blockchain Service, our customers have asked for a simple mechanism to tokenize physical or digital assets to accelerate blockchain deployments. Today, we’re launching the preview of Azure Blockchain Tokens, which simplifies the ability to define, create, and manage compliant tokens that are built on industry standards. Azure Blockchain Tokens (preview) provide pre-built templates for common scenarios and will support a gallery of templates created by partners in the future. With this latest offering, we can now offer customers an end-to-end experience of easily creating and managing tokens for physical or digital assets via Azure Blockchain Tokens (preview), in addition to managing the blockchain network itself via Azure Blockchain Service.

CEEK Virtual Reality, a streaming platform for live and recorded virtual and augmented reality experiences, uses Azure Blockchain Tokens to create a trusted platform for royalty payments. Smart tickets (a form of a token) allow content creators to track content viewership, ensuring royalty payout to creators is based on trusted data.

"CEEK Virtual Reality was looking for a trusted partner to help us with content viewership verification on the blockchain, and Azure Blockchain Tokens was perfect because it helped to drastically reduce our time to market and offered a trusted partner for providing proof on the blockchain." - Mary Spio, CTO, CEEK VR

Enhancing Azure Blockchain Service with blockchain data manager and additional ledger choice

Azure Blockchain Service has seen fantastic adoption since launch, with customers using it to simplify the management and formation of their blockchain networks so they can focus on business logic. Today, we’re making Azure Blockchain Service even better with the preview of blockchain data manager. Blockchain data manager (preview) is a new feature of Azure Blockchain Service that captures blockchain ledger data, transforms it (including decoding encrypted event and property state data), and then delivers it via Azure Event Grid to multiple destinations, such as off-chain databases like Azure Cosmos DB or Azure SQL Database. Blockchain data manager (preview) supports both public and private transaction data and greatly simplifies the cumbersome task of integrating existing applications with data that sits on a blockchain ledger.

In addition to simplifying blockchain data integration into existing applications, providing choice and flexibility is central to our investments in Azure Blockchain Service. Corda Enterprise joins Ethereum as an additional distributed ledger technology available within the service. For customers who prefer Hyperledger Fabric, an Azure Marketplace template using Azure Kubernetes Service is available for use starting today.

Investing in developer tools

Of course, accelerating blockchain enterprise adoption is only possible with developers. We are continuing to build on our investments for blockchain developers with updates to the  Azure Blockchain Development Kit for Ethereum extension for Visual Studio Code. These investments improve the productivity of developers, whether they are building an application on top of a blockchain network, or connecting a backend system to produce or consume blockchain data.

Recent investments in popular tools like OpenZeppelin integration provide easy discoverability and use of popular smart contracts for common developer needs. In addition to our focus on private blockchain developers, we are making sure public blockchain developers are equally well supported with investments in public chain tools, including Infura project integration. Adding native Infura integration to our Visual Studio Code extension makes it easy to create, interact with, and deploy to Infura projects. These developer tools integrate with Visual Studio Code and are available free of charge.

Continuous innovation supports your blockchain journey

Blockchain is an exciting and dynamic industry, and we remain committed to simplifying adoption in the enterprise across scenarios like supply chain visibility and traceability and royalty reconciliation, among others. With investments that span the whole stack—from developer tools to Azure infrastructure services and Azure managed services—enterprise adoption of blockchain is easier than ever.

Next steps


Azure. Invent with purpose.


Accelerating cloud-native application development in the enterprise


Each day more and more organizations experience the benefits of cloud native development. Using products like Azure Kubernetes Service (AKS), they’re able to build distributed applications that are more resilient and dynamically scalable, while enabling portability in the cloud and at the edge. Most of all, organizations want to use Kubernetes and cloud native technology to innovate faster in the enterprise where security, governance, and compliance are top of mind. We have been listening and we are happy to share several innovations designed to accelerate cloud native application delivery on Azure, powered by Kubernetes and AKS.

Streamlined developer experience

Git and GitHub have changed the way modern software is written. Pull requests (PRs) are now central to how development teams collaborate. While PRs are a great way to review specific code changes, it can be difficult to see how that code integrates with the rest of a complex microservices architecture. Dev Spaces with GitHub Actions PR flow for AKS solves this problem by automatically deploying review versions of your pull requests to a sandbox environment where you can easily perform end-to-end testing on any changes in your pull request branch. This speeds up the PR testing process and allows team members to confidently approve pull requests after ensuring that the new changes will not negatively impact other parts of the application. It also enables other team members, such as product managers and designers, to easily participate in the review process.

Dev Spaces connect, available in preview, allows developers to develop and test an individual service on their local workstation in the context of the broader application running in a shared AKS cluster, all without affecting other processes running in that cluster. With tools like Dev Spaces and the Visual Studio Code Kubernetes extension, we help customers accelerate their containerized app development. It’s great to see a leading firm like Forrester state in a recent report that Microsoft “leads the pack with the strongest developer experience and global reach.”

Reliable and scalable Kubernetes clusters

As enterprises continue to adopt Kubernetes and AKS at an incredible rate, we see an increasing number of mission-critical customer workloads that have stringent requirements around reliability and scalability. AKS support for availability zones, cluster-level autoscaling, and multiple node pools is now generally available. As Bosch has shared, Azure provides a simplified Kubernetes experience and helps you deliver reliable and scalable services more easily. It’s click and scale, or better yet, scale automatically using the autoscaling functionality in AKS.

For customers who need to operate across the globe, AKS is also now available in 36 regions, including Germany West Central, Switzerland North, Switzerland West, and UAE North, which is more regions of managed Kubernetes than any other cloud offers.

Operate seamlessly on-premises, in the cloud, and at the edge

The use of Kubernetes is growing everywhere. It’s growing in the cloud with products like AKS, but it’s also growing beyond cloud with clusters sprouting up on-premises and on the edge. To help our customers manage and govern these environments, we are introducing Azure Arc enabled Kubernetes clusters. By installing an agent on your Kubernetes cluster, you can now register your Kubernetes clusters in Azure no matter where they are running and provide a unified management and governance model, including centralized policy controls, role-based access control (RBAC), and configuration management through a simple GitOps workflow. This means you can use a simple GitHub pull request flow to securely deploy workloads to hundreds or thousands of Kubernetes clusters, all managed from the Azure portal.

Looking for a way to get a Microsoft-supported version of Kubernetes running on premises, on the edge, or even in a fully disconnected environment? Microsoft offers Kubernetes across our Azure Stack Hub portfolio of products. Kubernetes on Azure Stack Hub is now generally available featuring cluster lifecycle management capabilities. You can now easily provision Kubernetes clusters on Azure Stack Hub and automate the creation, update, patching, scaling, and deletion of these clusters using simple command line tools. We are also introducing Kubernetes on Azure Stack Edge, which is an Azure-managed edge computing appliance with either FPGA or GPU acceleration for powerful machine learning inferencing capabilities. Azure Stack Edge simplifies Kubernetes operations by automatically creating a cluster of appliances and connecting it to the cloud for you, where you can use Azure Arc to deploy and configure applications across all your Kubernetes clusters.

Easily monitor and troubleshoot

Kubernetes and cloud native systems have many moving parts. Managing these systems at scale requires top notch monitoring and observability tools. One such tool is Prometheus, a Cloud Native Computing Foundation (CNCF) project which has emerged as the standard mechanism for gathering metrics in the cloud native ecosystem. Prometheus integration with Azure Monitor is now generally available. Azure Monitor can now scrape your Prometheus metrics and store them on your behalf, without you having to operate your own Prometheus collection and storage infrastructure. We have Grafana templates so you can visualize the performance data from AKS. Today we are also introducing live container metrics from Azure Monitor. Live metrics and deployments, combined with live logs and events capabilities, provide a real-time view of what’s happening in AKS clusters and deployments, helping to diagnose and resolve issues faster than ever. Check out how Hafslund Nett has leveraged Azure Monitor together with AKS to speed development and testing without losing control over security and performance.

A secure, enterprise-grade foundation

Kubernetes and cloud native models can be challenging to secure and govern. This is especially true for container images, which can house new classes of operating system and library vulnerabilities. To address it, Azure Security Center performs vulnerability assessments on container images stored in Azure Container Registry. It can now scan the container registries within a customer’s subscription and provide recommendations to address specific vulnerabilities. We are also introducing a new set of threat protection features from Azure Security Center including discovery of AKS clusters in your cloud environment, actionable recommendations on how to help your clusters comply with security best practices, and threat detection based on host and cluster analytics.

The cloud-native space continues to evolve rapidly, with new technologies and patterns emerging every day. The pace of innovation is exciting, but it can also be daunting, especially for more conservative enterprises. With these innovations, we are further lowering the barriers to adopting cloud-native technologies. If you are new to Kubernetes, check out the Kubernetes overview, learning videos, and workshops. Kubernetes is defining the future of applications. Join thousands of Azure Kubernetes customers and start your Kubernetes journey with Azure.


Azure. Invent with purpose.

New Azure Security Center and Azure platform security capabilities


At Microsoft Ignite we're sharing the many new capabilities our teams have built to improve security with Azure Security Center and the Azure Platform. We have a long list of new innovations, and this blog provides our general direction and summarizes some of our favorite new features. For more information, you can read all the details in our Azure Security Center Community post.

Turn on the protection you need with Azure Security Center

Azure Security Center provides unified infrastructure security management that strengthens security posture and provides advanced threat protection across your workloads running in Azure, on-premises, and in other clouds. It enables continuous assessment of security posture, protects against cyberattacks using Microsoft’s vast threat intelligence, and helps implement security faster with integrated controls.

A screenshot of the Azure Security Center overview tab.

With Security Center, you can monitor the security of machines, networks, and Azure services using hundreds of built-in security assessments or create your own in a central dashboard.

Extending Azure Security Center’s coverage with a platform for community and partners

A constantly evolving threat landscape requires new approaches to protection, cloud security posture, enterprise-scale deployment, and automation. Through partnering with members of the Microsoft Intelligent Security Association, Microsoft is able to leverage a vast knowledge pool to defend against a world of increasing cybersecurity threats.

Leverage all of Security Center's capabilities against built-in and partner recommendations. Azure Security Center's simple onboarding flow connects existing solutions, including Check Point CloudGuard, CyberArk, and Tenable, enabling you to view all security posture recommendations in a single place. Run unified reports and export Security Center’s recommendations for connected partner products.

We invite users to contribute and help improve policies and configurations used in Security Center through the Azure Security Center community menu for additional scripts, content, and community resources.

Screenshot of the Azure Security Center Community page.

Enhanced threat protection for cloud resources

Threat protection detects and prevents attacks across a wide variety of services, from the infrastructure as a service (IaaS) layer to platform as a service (PaaS) resources in Azure, including Azure IoT and Azure App Service, and on-premises virtual machines.

Stream threat detection findings to Azure Sentinel for investigation, threat hunting, correlation with signals from other security solutions, and security operations center (SOC) level management.

The latest threat protection capabilities include:

  • Threat protection and vulnerability assessment support for SQL Server hosted on an Azure Virtual Machine.
  • Vulnerability assessment capabilities for VMs are part of our virtual machine protection offering (powered by Qualys) at no additional cost. Security Center collects the vulnerabilities and displays them as part of the secure score.
  • Threat protection suite for containers focusing on Azure Kubernetes Service (AKS) includes scanning of container images for vulnerabilities, secure configuration of the AKS cluster, and threat detection on the Kubernetes runtime activities.
  • Threat protection for Azure Key Vault is in preview in North America regions. This provides an additional layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit your encryption keys, certificates, and secrets in Azure Key Vault.
  • Threat protection for Azure Storage offers new detections powered by Microsoft Threat Intelligence for detecting malware uploads to Azure Storage using hash reputation analysis and suspicious access from an active Tor exit node (an anonymizing proxy). You can now view detected malware across storage accounts using Azure Security Center.

Cloud security posture management enhancements

Misconfiguration is the most common cause of security breaches for cloud workloads. Security Center provides a bird’s eye security posture view across your Azure environment, enabling you to continuously monitor and improve your security posture using the Azure secure score. Security Center helps manage and enforce your security policies to identify and fix misconfigurations across different resources and maintain compliance.

New capabilities:

  • Secure score simplified: Use the updated, percentage-based secure score to get better visibility into the secure score controls and a more reliable method for calculating the score.
  • Address misconfigurations faster with new quick-fix capabilities.
  • Add custom assessments, created in Azure Policy, into the secure score and monitor their compliance state in Security Center.
  • Automatically assess compliance state against a new set of regulatory standards, including NIST SP 800-53 R4, SWIFT CSP CSCF v2020, Canada Federal PBMM, and UK Official together with UK NHS.

Misconfigurations are the leading source of attacks and improving your secure score can make a remarkable difference in your overall security posture.

Implement security faster with Azure Security Center

To enable large organizations to leverage Security Center’s findings at enterprise scale, Azure Security Center continues to provide clear APIs, automation, and management capabilities that help customers connect Security Center to the workflows, processes, and tools used across the organization.

A new capability in Security Center enables the creation of rich workflows using Azure Logic Apps that trigger based on a recommendation or alert. Configure a logic app to perform a custom action supported by the vast community of Logic Apps connectors, or use one of the templates provided, such as sending an email or opening a service ticket.

Security from the ground up

In addition to Azure Security Center updates, we have several additional enhancements for Azure platform security. To empower you to do more, we are continuously enhancing the platform services to improve existing offerings and address your feedback.

Here are some of the exciting updates coming to the platform. 

Extension of Customer Lockbox for Microsoft Azure beyond virtual machines

Customer Lockbox provides customers the capability to control Azure support engineers' access to workloads that contain customer data. This expanded support now provides customers control over access to their data for a larger set of Azure offerings.

New services and scenarios, available in preview:

  • Azure Storage
  • Azure SQL Database
  • Azure Data Explorer
  • Memory dumps and managed disks for Azure Virtual Machines
  • Transferring Azure subscriptions

Release of Microsoft Secure Code Analysis toolkit to help you build secure code

With the Microsoft Security Code Analysis extension, you can infuse security analysis tools including Credential Scanner, BinSkim, and others into your Azure DevOps continuous integration and delivery (CI/CD) pipelines. Increase developer productivity and simplify security through easily configurable build tasks that abstract away the complexities (installing, updating, maintaining, and running) of the analysis tools without relinquishing control over them.

This product is now available via Unified Support. Customers can sign up using their existing credits or by paying the service fee. To learn more, please visit the Microsoft Secure Code Analysis documentation page.

Azure Disk Encryption in more places, and more services offering customer-managed keys

Azure Disk Encryption enables you to encrypt your Azure Virtual Machine disks with your keys safeguarded in Azure Key Vault. Previously this capability was available through PowerShell and CLI. We have now added this capability to the Azure portal, which makes it very easy to use. We have also added support for the latest versions of the common Linux distros on Azure, including Red Hat Enterprise Linux 7.6 and 7.7 as well as CentOS Linux 7.6 and 7.7.

Try it yourself using Quickstart for Windows or Quickstart for Linux now.

The following services recently announced preview for customer-managed keys for encryption at rest.

  • Azure Event Hubs
  • Azure Managed Disks
  • Power BI

For a full list of services offering encryption with customer-managed keys, see the Azure Data Encryption-at-Rest documentation page.

New Azure policies to manage certificates across your organization, currently in preview

Large organizations have thousands of certificates in key vaults distributed across thousands of applications and subscriptions. If you are responsible for security and compliance across the organization, you need a simple way to set rules across all these certificates, prove that those rules were followed, and flag violations. Azure Policy helps with this. We have added new policies in preview for certificates in Azure Key Vault.

  • Issuer Policy: Flag certificates that are (or are not) issued by a particular issuer.
  • Key Type Policy: Flag certificates that are (or are not) protected by an RSA or ECC key pair.
  • Key Size Policy: Flag certificates that are (or are not) protected by a key of a certain size.
  • Expiry Policy: Flag certificates that are (or are not) renewed within “X” number of days of their expiry date.
  • Validity Lifespan Policy: Flag certificates that have (or do not have) a validity lifespan of less than, more than, or equal to "X" years.

For more information see the documentation for Azure Key Vault governance policies.

Azure Key Vault Virtual Machine extension now generally available

The Azure Key Vault Virtual Machine extension makes it easier for apps running on virtual machines to use certificates from a key vault, by abstracting the common tasks as well as best practices—authenticate, handle common network errors, cache, periodically refresh the certificate from the key vault, and bind the certificate for Transport Layer Security (TLS).

This extension is now generally available for Windows and Linux.
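
For context, here is a minimal sketch (not from the announcement) of what an app on the virtual machine might do once the extension has synced a certificate into the local machine store; the subject name "contoso-tls" is a placeholder.

using System.Linq;
using System.Security.Cryptography.X509Certificates;

// Open the local machine certificate store that the extension keeps up to date.
using var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
store.Open(OpenFlags.ReadOnly);

// Pick the freshest valid copy of the certificate by subject name; the app can then bind it for TLS.
var certificate = store.Certificates
    .Find(X509FindType.FindBySubjectName, "contoso-tls", validOnly: true)
    .OfType<X509Certificate2>()
    .OrderByDescending(c => c.NotAfter)
    .FirstOrDefault();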

Free Azure managed certificates for your domains on Azure

We want to make sure there are no reasons not to use TLS in your Azure applications. Azure now provides TLS certificates at no cost to you for your custom domains hosted on the following services. Azure renews these certificates automatically.

  • Azure CDN managed certificates (generally available).
  • Azure Front Door managed certificates (generally available).
  • Azure App Service managed certificates for both web apps and functions (currently in preview).

We will expand this to other Azure PaaS services in the future.

Note that this is just one of your options. If you have a need to use certificates from a different certificate authority (CA), then you have the option to configure these Azure services to use a certificate you manage in your key vault.

Learn more

With these additions, Azure continues to provide a secure foundation and gives you built-in security tools and intelligent insights to help you rapidly improve your security posture in the cloud. Azure Security Center strengthens its role as the unified security management and advanced threat protection solution for your hybrid cloud.

For Azure app developers:

For users responsible for security across their organizations:

  • Evaluate Azure Policy, including the new Key Vault policies, to ensure developers across your organization follow the rules you set for security and compliance.

Security can’t wait. Get started with Azure Security Center today and visit Azure Security Center Tech Community, where you can engage with other security-minded users like yourselves.


Azure. Invent with purpose.

Secure and compliant APIs for a hybrid and multi-cloud world


APIs are everywhere. The broad proliferation of applications throughout enterprises often results in large silos of opaque processes and services, making it hard for IT to manage and govern APIs in a systematic way, and for development teams to gain visibility into and make use of APIs that already exist.

Entire industries, such as financial services, are embracing APIs as a means to become more open, for example with open banking initiatives. Open banking is an API-first approach to creating more open, rich ecosystems that encourage third-party participation and usage of the services financial institutions have previously kept behind the scenes.

Products, such as Azure API Management, were created to address these issues. By letting you manage all APIs in a single, centralized location, you are able to impose authentication, authorization, throttling, and transformation policies and easily monitor the usage of the APIs associated with your applications, giving you the much-needed visibility into your application portfolio(s) at a macro-level.

To succeed in an increasingly connected world, it is key to adopt an API-first approach that lets you:

  • Embrace innovation by creating vibrant API ecosystems.
  • Secure and manage APIs seamlessly in a hybrid world.

APIs can be a bridge to an uncertain future and help you safely traverse turbulent waters.

Embrace innovation by creating vibrant API ecosystems

Microsoft offers all the tools you need to capitalize immediately on new opportunities as they emerge in the business landscape. Our infrastructure technologies, such as Kubernetes and serverless computing, accelerate development velocity and help developers move faster than ever before. Our API technologies, such as API Management, accelerate the speed at which new opportunities can be acted upon by immediately providing channels for partners, developers, customers, and other third parties to leverage newly created technology. These types of activities are often done with tools such as an API developer portal.

Azure API Management’s developer portal lets you easily grant access (and control) to APIs. The developer portal provides documentation on how to use the APIs and creates a simple, easy way for people to get started. A developer portal is an integral part of any API-first approach, which is why we’re announcing the general availability of our greatly improved developer portal experience.

You can now easily customize the developer portal with a visual user interface, helping create a branded experience. The developer portal is open source and built with extensibility in mind. You can easily fork our repository and customize it to meet your needs. It was created using contemporary JAMstack technologies that significantly reduce page load times, making the user experience as frictionless as possible.

You can learn more about this announcement by reading our Azure Update on the release.

Secure and manage APIs seamlessly in a hybrid world

Today’s most popular API management solutions run in public clouds. And while a purely cloud-based API management service works well for most scenarios, it’s not always the best choice. Perhaps compliance requirements mandate that information must stay on the corporate network, or maybe accessing the cloud is prohibited by company policy. Whatever the reason, scenarios like this can’t use an API management service running in any public cloud; the service must run on-premises.

To meet your hybrid requirements, we’re announcing the preview of Azure Arc enabled API Management, a self-hosted API gateway. The new self-hosted API gateway doesn’t replace the primary cloud-based API management service. Instead, it augments this service by providing the essential aspects of API management in software that organizations can run wherever they choose.

Azure Arc enabled API management

The self-hosted gateway is a containerized version of the Azure API Management gateway that you can host on-premises or in any other environment that supports the deployment of Docker containers. It enables more efficient call patterns for internal-only APIs as well as for mixed internal and external APIs, and it is managed from a cloud-based Azure API Management instance. Azure Arc enabled API Management lets you run the self-hosted gateway in your own on-premises datacenter or in another cloud.

Read the whitepaper we’ve released, API management in a hybrid and multi-cloud world, which goes into further technical detail on Azure Arc enabled API Management, as well as the strategic benefits you receive when adopting this approach.

Or, you can start a free trial of Microsoft Azure and check out API Management for yourself.

Heading into the future

APIs are the way that businesses will continue to communicate. The growth of APIs has continued to accelerate, and the rise of the API product is happening right now. Many companies now offer API-first products, which is a powerful reminder that a well-thought-out API strategy is going to be key to any business strategy moving forward.

To learn more about what APIs and API Management can do for you, you can visit API Management on Azure.


Azure. Invent with purpose.

Azure Cognitive Services for building enterprise-ready, scalable AI solutions


This post is co-authored by Tina Coll, Senior Product Marketing Manager, Azure Cognitive Services and Anny Dow, Product Marketing Manager, Azure Cognitive Services.

Azure Cognitive Services brings artificial intelligence (AI) within reach of every developer without requiring machine learning expertise. All it takes is an API call to embed the ability to see, hear, speak, understand, and accelerate decision-making into your apps. Enterprises have taken these pre-built and custom AI capabilities to deliver more engaging and personalized intelligent experiences. We’re continuing the momentum from Microsoft Build 2019 by making Personalizer generally available, and introducing additional advanced capabilities in Vision, Speech, and Language categories. With many advancements to share, let’s dive right in.

Personalizer: Powering rich user experiences

Winner of this year’s ‘Most Innovative Product’ award at O’Reilly’s Strata Conference, Personalizer is the only AI service on the market that makes reinforcement learning available at-scale through easy-to-use APIs. Personalizer is powered by reinforcement learning and provides developers a way to create rich, personalized experiences for users, even if they do not necessarily have deep machine learning expertise.

Giving customers what they want at any given moment is one of the biggest challenges faced by retail, media, and e-commerce businesses today. Whether it’s applying randomized A/B tests or supervised machine learning, businesses struggle to keep up with delivering unique and relevant experiences to each user. This is where Personalizer comes in, exploring new options to stay on top of previously unencountered influences on user behavior through a cutting-edge machine learning technique known as reinforcement learning. This technique allows Personalizer to learn from what’s happening in the world in real time and update the underlying algorithm as frequently as every few minutes. The result is a significant improvement in your app's usability and user satisfaction. When Xbox implemented Personalizer on their homepage, they saw a 40 percent lift in user engagement.

Illustration of a data scientist and the reinforcement learning cycle that drives personalization
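
To make the rank-and-reward loop concrete, here is a minimal sketch using the Microsoft.Azure.CognitiveServices.Personalizer client library; the endpoint, key, and the food-related actions and context features are illustrative placeholders rather than values from this announcement.

using System;
using System.Collections.Generic;
using Microsoft.Azure.CognitiveServices.Personalizer;
using Microsoft.Azure.CognitiveServices.Personalizer.Models;

var client = new PersonalizerClient(new ApiKeyServiceClientCredentials("<your-key>"))
{
    Endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
};

// Candidate actions the service can choose between, each described by its own features.
var actions = new List<RankableAction>
{
    new RankableAction("pasta", new List<object> { new { cuisine = "italian" } }),
    new RankableAction("salad", new List<object> { new { cuisine = "light" } })
};

// Context features describing the current user and moment.
var context = new List<object> { new { timeOfDay = "evening", device = "mobile" } };

// Ask Personalizer which action to show.
RankResponse ranked = client.Rank(new RankRequest(actions, context));
Console.WriteLine($"Show: {ranked.RewardActionId}");

// Later, report how well that choice worked (1 = best outcome, 0 = worst).
client.Reward(ranked.EventId, new RewardRequest(1.0));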

Form Recognizer: Increase efficiency with automated text extraction and feedback loop

Businesses often rely on a variety of documents that can be hard to read; these documents are not always cleanly printed, and many include handwritten text. Businesses including Chevron use Form Recognizer to accelerate document processing through automatic information extraction from printed forms. This frees their employees to focus on more challenging and higher-value tasks.

Form Recognizer extracts key-value pairs, tables, and text from documents including W2 tax statements, oil and gas drilling well reports, completion reports, invoices, and purchase orders. Today we are announcing feedback loop support to enable even more accurate data extraction. Users will be able to provide labeled examples of the specific values they want extracted. This feature enables Form Recognizer to support any type of form, including values without keys, keys under values, tilted forms, photos of forms, and more. Starting with just 10 forms, users can train a model tailored to their use case with high-quality results. A new user experience gets you started quickly: select the values of interest, label them, and train your custom model.

sample UX tool showing the form labeling experience

In addition, Form Recognizer can now train a single model without labels for all the different types of forms, and supports training on large datasets and analyzing large documents with the new AsyncAPI. This benefit enables customers to train a single model for the different types of invoices, purchase orders, and more without the need to classify the documents in advance.

We have also enhanced our pre-built receipts capabilities with accuracy improvements, additional new fields for tips, receipt types (itemized, credit card slip, gas, parking, other), and line item extraction detailing all the different items in the receipt. Finally, we have also improved the accuracy of our text recognition, enabling extraction of high-quality text from forms, as well as the accuracy of our table extraction.

Sogeti, part of Capgemini, is harnessing these new Form Recognizer capabilities. As Arun Kumar Sahu, the Manager of AI and ML for Sogeti, notes:

“We are working on a document classification and predictive solution for one of the largest automobile auction companies in the US, and needed an efficient way to extract information from various automobile related documents (PDF or image). Form Recognizer was quick and easy to train and host, was cost effective, handled different document formats, and the output was amazing. The new labelling features made it very effective to customize key value pair extraction.”

Speech: Enable more natural interactions and accelerate productivity with advanced speech capabilities

Businesses want to be able to modernize and enable more seamless, natural interactions with their customers. Our latest advancements in speech allow customers to do just that.

At Microsoft Ignite 2018, we introduced our neural text-to-speech capability, which uses deep neural networks to enable natural-sounding speech and reduces listening fatigue for users interacting with AI systems. Neural text-to-speech can be used to make interactions with chatbots and virtual assistants more natural and engaging, convert digital texts such as e-books into audiobooks, and enhance in-car navigation systems. We’re excited to build upon these advancements with the Custom Neural Voice capability, which enables customers to build a unique brand voice, starting from just a few minutes of training audio. The Custom Neural Voice capability can enable scenarios such as customer support provided by a company’s branded character, interactive lesson plans or guided museum tours, and voice assistive technologies. The capability also supports generating long-form content, including audiobooks.
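
As a rough illustration (not a step from this announcement), here is a minimal sketch of calling a neural voice with the Speech SDK; the key, region, and voice name are placeholders, and a Custom Neural Voice would be referenced by its own deployed voice name.

using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

class TextToSpeechSketch
{
    static async Task Main()
    {
        // Subscription key, region, and voice name are placeholders.
        var config = SpeechConfig.FromSubscription("<your-key>", "<your-region>");
        config.SpeechSynthesisVoiceName = "en-US-JessaNeural"; // or your Custom Neural Voice name

        // Synthesize to the default speaker.
        using var synthesizer = new SpeechSynthesizer(config);
        var result = await synthesizer.SpeakTextAsync("Thanks for calling Contoso support.");

        if (result.Reason == ResultReason.SynthesizingAudioCompleted)
            Console.WriteLine("Speech synthesized successfully.");
    }
}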

The Beijing Hongdandan Education and Culture Exchange Center is dedicated to using audio to create accessible products for those with visual impairments and improving the lives of the visually impaired by providing aids such as audiobooks. Hongdandan is using the Custom Neural Voice capability to produce audiobooks based on the voice of Lina, who lost her sight at the age of 10. Lina is now a trainer at the Hongdandan Service Center, using her voice to teach others who are visually impaired to communicate well.

With the rapid pace at which business is moving today, remembering all the details from your last important meeting and tracking next steps and key deadlines can be a real challenge. Quickly and accurately transcribing calls can help various stakeholders stay on the same page by capturing critical details and making it easy to search and review topics you discussed. In customer support scenarios, being able to hear and understand your customers and keep an accurate record of information is critical for tracking customer requirements and enabling broader analysis.

However, accurately transcribing organization-specific terms like product names, technical terms, and people's names poses another barrier. With Custom Speech, you can tailor speech recognition models based on your own data so that your unique terms are accurately captured. Simply upload your audio to train a custom model. Now, you can also optimize speech recognition on your organization-specific terms by automatically generating custom models using your Office 365 data in a secure and compliant fashion. With this opt-in feature, organizations using Office 365 can more accurately transcribe company terminology, whether in internal meetings or on customer calls. The organization-wide language model is built only using conversations and documents from public groups that everyone in the organization can access.
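
For illustration, a minimal sketch of pointing the Speech SDK at a Custom Speech model; the key, region, endpoint ID, and file name are placeholders rather than values from this post.

using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

class TranscriptionSketch
{
    static async Task Main()
    {
        var config = SpeechConfig.FromSubscription("<your-key>", "<your-region>");

        // Point recognition at your deployed Custom Speech model instead of the baseline model.
        config.EndpointId = "<your-custom-speech-endpoint-id>";

        using var audio = AudioConfig.FromWavFileInput("meeting-recording.wav");
        using var recognizer = new SpeechRecognizer(config, audio);

        var result = await recognizer.RecognizeOnceAsync();
        Console.WriteLine(result.Text);
    }
}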

Additional new features such as Custom Commands, Custom Speech and Voice containers, Speech Translation with automatic language identification, and Direct Line Speech channel integration with Bot Framework are making it easier to quickly embed advanced speech capabilities into your apps. For more information, visit the Azure Speech Services page.

Language: Extract deeper insights from customer feedback and text documents

There are a multitude of valuable customer insights captured today—whether in social media, customer reviews, or discussion forums. The challenge is being able to extract insights from that data, so businesses can act fast to improve customer service and meet the needs of the market. With the Text Analytics Sentiment Analysis capability, businesses can easily detect positive, neutral, negative, and mixed sentiment in content, enabling them to keep an ongoing pulse on customer satisfaction, better engage their customers, and build customer loyalty. The latest release of the Sentiment Analysis capability offers greater accuracy in sentiment scoring, as well as the ability to detect sentiment for both an entire document as well as individual sentences.
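
As a rough sketch (assuming the Azure.AI.TextAnalytics client library, with a placeholder endpoint and key), document- and sentence-level sentiment can be retrieved like this:

using System;
using Azure;
using Azure.AI.TextAnalytics;

var client = new TextAnalyticsClient(
    new Uri("https://<your-resource>.cognitiveservices.azure.com/"),
    new AzureKeyCredential("<your-key>"));

// Analyze one piece of customer feedback.
DocumentSentiment review = client.AnalyzeSentiment(
    "The staff was friendly, but the checkout process was painfully slow.");

Console.WriteLine($"Overall sentiment: {review.Sentiment}");

// Sentence-level sentiment helps pinpoint exactly what customers liked or disliked.
foreach (SentenceSentiment sentence in review.Sentences)
{
    Console.WriteLine($"  \"{sentence.Text}\" -> {sentence.Sentiment}");
}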

Another challenge of extracting information from your data is being able to take unstructured natural language text and identify occurrences of entities such as people, locations, organizations, and more. Text Analytics is expanding entity type support to more than 100 named entity types, making it easier than ever to extract meaningful information and analyze relationships from raw text and between terms. Additionally, customers will now be able to detect and extract more than 80 kinds of personally identifiable information in English language text documents.

We are also adding several new capabilities to Language Understanding Intelligent Service (LUIS) that enable developers to build sophisticated conversational models. The new capabilities provide the ability to handle more complex requests from users (for example, if you want to allow customers to truly use natural language, they might order ‘Two burgers with no onions and replace the buns with lettuce wraps’). This gives customers advanced support for hierarchical entities and model decomposition, so they can build more sophisticated language models that reflect the way humans speak. In addition, we are adding more regions and expanding the set of human languages supported in LUIS with the addition of Hindi and Arabic.

Enterprise Ready: Azure Virtual Network for enhanced data security

One of the most important considerations when choosing an AI service is security and regulatory compliance. Can you trust that the AI is being processed with the high standards and safeguards that you have come to expect from hardened, durable software systems? Azure Cognitive Services offers over 70 certifications. Today we are offering Virtual Network support as part of Cognitive Services to ensure maximum security for sensitive data. The services are also being made available in containers that can run in a customer’s Azure subscription or on-premises.

Get started today

We are continuing to enable new powerful and intelligent scenarios for our customers that improve their productivity and user experiences. The incredible breadth of services available through Azure Cognitive Services enables you to extract insights from all your data. Using these new announcements, you can accurately extract text from forms using Form Recognizer, analyze and understand this text using Text Analytics and LUIS, and finally, provide these insights to your users through a spoken, conversational interface with our speech services.

These milestones illustrate our commitment to make the Azure AI platform suitable for every business scenario, with enterprise-grade tools that simplify application development and industry-leading security and compliance for protecting customers’ data.

Get started today by building your first intelligent application using an Azure free account and learn more about Cognitive Services.


Azure. Invent with purpose.

Secure software supply chain with Azure Pipelines artifact policies


We are announcing a preview capability for Azure Pipelines that allows you to define artifact policies that are enforced before deploying to critical environments such as production. You will be able to define custom policies that are evaluated against all the deployable artifacts in a given pipeline run and block the deployment if the artifacts don’t comply. At launch, we support container images and Kubernetes environments; support for other artifact types and target environment resources will be added in the coming months.

Teams know how valuable and sensitive production environments are, and changes to them usually require following multiple checklists suggested by the security, operations, and engineering teams, among others. However, sometimes the protocol is not fully respected; and when that happens, audit teams raise red flags, and after some root-causing and soul searching, teams settle on an updated process that prevents the lapse from recurring, most likely increasing their mean time to deliver (MTD).

At a high level, an application environment can be described as three layers: infrastructure, application platform, and application.

Application layers

Each of these layers can be described as code, with its own set of best practices to adopt. We’ll be focusing on the application layer in this article, which is updated most frequently and is the interface through which external systems and users interact.

So, when it comes to checklists in the context of an application update, what are the usual suspects?

For example:

  • Allow application binaries from trusted sources
  • Application binaries have been built from a trusted source control repository
  • Allow production deployment only when an application has been deployed and tested in a staging environment
  • Static analysis tools, such as code coverage and lint, have been run (with acceptable thresholds)
  • Application bits have passed specific tests (with acceptable thresholds)
  • No known vulnerabilities found above severity level: medium
  • Green light from a custom tool

This is a supply chain problem, where the goods (the application artifact) to be delivered (to the production system) go through various waypoints (build, test, analysis) and the shipment is tracked in a waybill (a record of where the artifact originated and the processes it’s been through). Not surprisingly, the term software supply chain has been picking up in recent years. Let’s say we have an artifact with all the attributions related to the build, tests, and other processes it’s been through, and there are policies defined collectively by the teams: can we now eliminate the manual intervention before production deployment? Here’s how artifact policies can be put to work.

The artifact policy is configured as a Check on an Environment.

Check configuration lets you specify custom policies to enforce; we have a set of examples to help you get started. The policy is evaluated with the Open Policy Agent.

After you define a policy, when a container image is built, tested, or deployed, metadata is automatically attributed to the resulting artifact (the container image); you can even add custom metadata if desired. When a pipeline run targets an environment with the artifact policy check configured, the custom policies are evaluated before the deployment stage runs. As part of the evaluation, metadata for all the pipeline images (images either consumed via resources or built in any previous stages) is retrieved and the policy is evaluated for all deployable images. If the images comply, they can be deployed; otherwise the pipeline halts with an error, as in this example:

And there it is: Azure Pipelines captures an artifact’s provenance and provides a mechanism to enforce custom policies before artifacts can be deployed to a production environment, instantly and without you having to validate them manually every single time!

You can learn more by looking at the artifact policy check documentation and reading about writing a custom policy. Additionally, you can check out sample policies.

If you have any feedback, get in touch by posting on our Developer Community or reaching out on Twitter at @AzureDevOps.


.NET Core with Jupyter Notebooks – Available today | Preview 1


When you think about Jupyter Notebooks, you probably think about writing your code in Python, R, Julia, or Scala and not .NET. Today we are excited to announce you can write .NET code in Jupyter Notebooks.

Try .NET has grown to support more interactive experiences across the web with runnable code snippets, an interactive documentation generator for .NET Core with the dotnet try global tool, and now .NET in Jupyter Notebooks.

Build .NET Jupyter Notebooks

To get started with .NET Notebooks, you will need the following:

Please note: If you have the dotnet try global tool already installed, you will need to uninstall it before grabbing the kernel-enabled version.

  • Install the .NET kernel:
    dotnet try jupyter install
  • Check that the .NET kernel is installed:
    jupyter kernelspec list

kernelspec

  • To start a new notebook, you can either type jupyter lab in the Anaconda prompt or launch a notebook using the Anaconda Navigator.
  • Once Jupyter Lab has launched in your preferred browser, you have the option to create a C# or F# notebook.

Features

The initial set of features we released needed to be relevant to developers already familiar with the notebook experience, while also giving users new to notebooks a useful set of tools they would be eager to try. Let’s have a look at some of the features we have enabled.

The first thing you will need to be aware of is when writing C# or F# in a .NET Notebook, you will be using C# Scripting or F# interactive.

You can either explore the features listed below locally on your machine or online using the dotnet/try binder image.

For the online documentation, please go to the Docs subfolder located in C# or F# folders.

List of features

Display output: There are several ways to display output in notebooks (see the short sketch below).

Object formatters: By default, the .NET notebook experience enables users to display useful information about an object in table format.

HTML output: By default, .NET notebooks ship with several helper methods for writing HTML, from basic helpers that let users write out a string as HTML or emit JavaScript, to more complex HTML composed with PocketView.
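
A small sketch of what a cell might look like, assuming the display() and HTML() helpers that ship with the .NET kernel; the sample data is made up.

// In a C# notebook cell, display() can be called anywhere, and the default object
// formatter renders collections of objects as an HTML table.
var releases = new[]
{
    new { Product = "ML.NET", Version = "1.4" },
    new { Product = ".NET Core", Version = "3.0" }
};

display(releases);                              // rendered as a table by the object formatter
display(HTML("<b>Done loading releases</b>"));  // write a string out as raw HTML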

Importing packages: You can load NuGet packages using the following syntax:

#r "nuget:<package name>,<package version>"

For example:

#r "nuget:Octokit, 0.32.0"
#r "nuget:NodaTime, 2.4.6"
using Octokit;
using NodaTime;
using NodaTime.Extensions;
using XPlot.Plotly;

Charts with XPlot

Charts are rendered using XPlot.Plotly. As soon as users import the XPlot.Plotly namespace into their notebooks (using XPlot.Plotly;), they can begin creating rich data visualizations in .NET.
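
A minimal sketch of rendering a chart in a C# notebook cell with XPlot.Plotly; the data here is made up.

// Hypothetical data: any two numeric sequences will do.
var weeks = new[] { 1, 2, 3, 4 };
var openIssues = new[] { 120, 98, 75, 60 };

var chart = Chart.Plot(
    new Graph.Scatter
    {
        x = weeks,
        y = openIssues,
        mode = "lines+markers"
    });
chart.WithTitle("Open issues per week");

display(chart);  // render the Plotly chart inline in the notebook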

Please check the .NET Notebook online for more documentation and samples.

.NET Notebooks are perfect for ML.NET and .NET for Apache® Spark™

.NET notebooks bring iterative, interactive experiences popular in the worlds of machine learning and big data to .NET.

ML.NET

ML.NET with Jupyter Notebooks

.NET notebooks open up several compelling scenarios for ML.NET, like exploring and documenting model training experiments, data distribution exploration, data cleaning, plotting data charts, and learning.

For more details on how you can leverage ML.NET in Jupyter notebooks, check out this blog post on Using ML.NET in Jupyter notebooks. The ML.NET team has put together several online samples for you to get started with.

.NET for Apache® Spark™

Big Data for .NET

Notebook support is indispensable when you are dealing with big data use cases. Notebooks allow data scientists, machine learning engineers, analysts, and anyone else interested in big data to prototype, run, and analyze queries rapidly.

So how can .NET developers and major .NET shops keep up with our data-oriented future? The answer is .NET for Apache Spark, which you can now use from within notebooks!

Today, .NET developers have two options for running .NET for Apache Spark queries in notebooks: Azure Synapse Analytics Notebooks and Azure HDInsight Spark + Jupyter Notebooks. Both experiences allow you to write and run quick ad-hoc queries in addition to developing complete, end-to-end big data scenarios, such as reading in data, transforming it, and visualizing it.

Option 1: Azure Synapse Analytics ships with out-of-the-box .NET support for Apache Spark (C#).

Option 2: Check out the guide on the .NET for Apache Spark GitHub repo to learn how to get started with .NET for Apache Spark in HDInsight + Jupyter notebooks. The experience will look like the image below.

Get Started with the .NET Jupyter Notebooks Today!

The .NET kernel brings the interactive developer experience of Jupyter Notebooks to the .NET ecosystem. We hope you have fun creating .NET notebooks. Please check out our repo to learn more, and let us know what you build.


Announcing ML.NET 1.4 general availability (Machine Learning for .NET)


Coinciding with the Microsoft Ignite 2019 conference, we are thrilled to announce the GA release of ML.NET 1.4 and updates to Model Builder in Visual Studio, with exciting new machine learning features that will allow you to innovate your .NET applications.

ML.NET is an open-source and cross-platform machine learning framework for .NET developers. ML.NET also includes Model Builder (an easy-to-use UI tool in Visual Studio) and the CLI (command-line interface) to make it super easy to build custom machine learning (ML) models using Automated Machine Learning (AutoML).

Using ML.NET, developers can leverage their existing tools and skillsets to develop and infuse custom ML into their applications by creating custom machine learning models for common scenarios like Sentiment Analysis, Price Prediction, Sales Forecast prediction, Customer segmentation, Image Classification and more!

Following are some of the key highlights in this update:

ML.NET Updates

In ML.NET 1.4 GA we have released many exciting improvements and new features that are described in the following sections.

Image classification based on deep neural network retraining with GPU support (GA release)

ML.NET, TensorFlow, NVIDIA-CUDA

This feature enables native DNN (Deep Neural Network) transfer learning with ML.NET targeting image classification.

For instance, with this feature you can create your own custom image classifier model by natively training a TensorFlow model from ML.NET API with your own images.

Image classifier scenario – Train your own custom deep learning model with ML.NET

Image Classification Training diagram

ML.NET uses TensorFlow through the low-level bindings provided by the Tensorflow.NET library. The advantage provided by ML.NET is that it exposes a simple, high-level API, so with just a couple of lines of C# code you can define and train an image classification model. A comparable action using the low-level Tensorflow.NET library would require hundreds of lines of code.

The Tensorflow.NET library is an open source and low-level API library that provides the .NET Standard bindings for TensorFlow. That library is part of the open source SciSharp stack libraries.

The below stack diagram shows how ML.NET is implementing these new features on DNN training.

DNN stack diagram

As the first main scenario for the high-level API, we are currently providing image classification, but the goal for this new API is to enable easy-to-use DNN training for additional scenarios, such as object detection, by providing a powerful yet simple API.

This image classification feature was initially released in v1.4-preview. Now we’re releasing it as GA, and we’ve added the following new capabilities:

Improvements in v1.4 GA for Image Classification

The main new capabilities in this feature added since v1.4-preview are:

  • GPU support on Windows and Linux. GPU support is based on NVIDIA CUDA. Check the hardware/software prerequisites and the GPU setup procedure here. You can also train on CPU if you cannot meet the requirements for GPU.

    • SciSharp TensorFlow redistributable supported for CPU or GPU: ML.NET is compatible with SciSharp.TensorFlow.Redist (CPU training), SciSharp.TensorFlow.Redist-Windows-GPU (GPU training on Windows) and SciSharp.TensorFlow.Redist-Linux-GPU (GPU training on Linux).
  • Predictions on in-memory images: You make predictions with in-memory images instead of file-paths, so you have better flexibility in your app. See sample web app using in-memory images here.

  • Training early stopping: Training stops when optimal accuracy has been reached and is not improving any further with additional training cycles (epochs).

  • Added additional supported DNN architectures to the image classifier: The supported DNN architectures (pre-trained TensorFlow models) used internally as the base for transfer learning have grown to the following list:

    • Inception V3 (Was available in Preview)
    • ResNet V2 101 (Was available in Preview)
    • Resnet V2 50 (Added in GA)
    • Mobilenet V2 (Added in GA)

Those pre-trained TensorFlow models (DNN architectures) are widely used image recognition models trained on very large image sets such as the ImageNet dataset, and they are the culmination of many ideas developed by multiple researchers over the years. You can now take advantage of them by using our easy-to-use API in .NET.

Example code using the new ImageClassification trainer

The below API code example shows how easily you can train a new TensorFlow model.

Image classifier high level API code example:

// Define model's pipeline with ImageClassification defaults (simplest way)
var pipeline = mlContext.MulticlassClassification.Trainers
      .ImageClassification(featureColumnName: "Image",
                            labelColumnName: "LabelAsKey",
                            validationSet: testDataView)
   .Append(mlContext.Transforms.Conversion.MapKeyToValue(outputColumnName: "PredictedLabel",
                                                         inputColumnName: "PredictedLabel"));

// Train the model
ITransformer trainedModel = pipeline.Fit(trainDataView);

The important line in the code above is the one using the ImageClassification trainer. As you can see, it is a high-level API where you only need to provide the column that contains the images, the column with the labels (the column to predict), and a validation dataset used to calculate quality metrics during training so the model can tune itself (adjust internal hyper-parameters) as it trains.

There's also an overloaded method for advanced users where you can specify optional hyper-parameters such as epochs, batchSize, learningRate, and other typical DNN parameters (a sketch is shown below), but most users can get started with the simplified API.
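
For illustration, here is a minimal sketch of what that advanced overload can look like, assuming the ImageClassificationTrainer.Options type from the Microsoft.ML.Vision package (exact option and enum names may differ slightly between versions):

// Sketch only: advanced overload with explicit hyper-parameters (option names assumed
// from the Microsoft.ML.Vision package and may vary slightly between versions)
var options = new ImageClassificationTrainer.Options
{
    FeatureColumnName = "Image",
    LabelColumnName = "LabelAsKey",
    Arch = ImageClassificationTrainer.Architecture.MobilenetV2, // base DNN architecture
    Epoch = 50,                  // training cycles
    BatchSize = 10,              // images per training batch
    LearningRate = 0.01f,        // step size for weight updates
    ValidationSet = testDataView
};

var advancedPipeline = mlContext.MulticlassClassification.Trainers.ImageClassification(options)
    .Append(mlContext.Transforms.Conversion.MapKeyToValue("PredictedLabel"));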

Under the covers, this model training is based on native TensorFlow DNN transfer learning from a default architecture (pre-trained model) such as ResNet V2 50. You can also select the architecture you want to derive from by configuring the optional hyper-parameters.
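
Once the model is trained, it can also score in-memory images, as mentioned in the capability list above. The following is a rough sketch, assuming hypothetical InMemoryImageData and ImagePrediction classes whose column names match what the trained pipeline expects:

// Sketch only: the InMemoryImageData/ImagePrediction classes are illustrative and must
// match the column names used by your trained pipeline
public class InMemoryImageData
{
    public byte[] Image;   // raw image bytes, e.g. received from a web request
}

public class ImagePrediction
{
    public string PredictedLabel;
    public float[] Score;
}

// ...

var predictionEngine = mlContext.Model
    .CreatePredictionEngine<InMemoryImageData, ImagePrediction>(trainedModel);

byte[] imageBytes = File.ReadAllBytes("some-image.jpg"); // or any in-memory byte[]
ImagePrediction prediction = predictionEngine.Predict(new InMemoryImageData { Image = imageBytes });

Console.WriteLine($"Predicted class: {prediction.PredictedLabel}");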


Database Loader (GA Release)

Database Loader diagram

This feature was previously introduced as a preview and is now generally available in v1.4.

The database loader enables you to load data from relational databases into an IDataView, so you can train models directly against those databases. This loader supports any relational database provider supported by System.Data in .NET Core or .NET Framework, meaning you can use any RDBMS such as SQL Server, Azure SQL Database, Oracle, SQLite, PostgreSQL, MySQL, Progress, and more.

In previous ML.NET releases you could also train against a relational database by providing data through an IEnumerable collection with the LoadFromEnumerable() API, where the data could come from a relational database or any other source. With that approach, however, you as the developer are responsible for the code that reads from the relational database (for instance with Entity Framework or any other approach), and that code needs to be implemented carefully so data is streamed while training the ML model, as in this previous sample using LoadFromEnumerable().
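
For comparison, a rough sketch of that previous approach (assuming a hypothetical GetSentimentRecords() method that streams rows from the database, for example via Entity Framework) looks like this:

// Sketch of the previous approach: you own the database-reading code
IEnumerable<SentimentData> databaseRows = GetSentimentRecords(connectionString); // hypothetical helper
IDataView trainingData = mlContext.Data.LoadFromEnumerable(databaseRows);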

This new database loader gives you a much simpler implementation: the way it reads from the database and exposes the data through an IDataView is provided out of the box by the ML.NET framework. You only need to specify your database connection string, the SQL statement that selects the dataset columns, and the data class to use when loading the data. It is that simple!

Here's example code showing how you can configure your code to load data directly from a relational database into an IDataView, which will be used later when training your model.

// Load data from a relational database into an IDataView for later model training
// (requires the System.Data.SqlClient package and a using directive for System.Data.Common)
//...
string connectionString = @"Data Source=YOUR_SERVER;Initial Catalog=YOUR_DATABASE;Integrated Security=True";

string commandText = "SELECT * FROM SentimentDataset";

// On .NET Core, register the SqlClient provider factory before resolving it by name
DbProviderFactories.RegisterFactory("System.Data.SqlClient", SqlClientFactory.Instance);

DatabaseLoader loader = mlContext.Data.CreateDatabaseLoader<SentimentData>();
DbProviderFactory providerFactory = DbProviderFactories.GetFactory("System.Data.SqlClient");
DatabaseSource dbSource = new DatabaseSource(providerFactory, connectionString, commandText);

IDataView trainingDataView = loader.Load(dbSource);

// ML.NET model training code using the training IDataView
//...

public class SentimentData
{
    public string FeedbackText;
    public string Label;
}

It is important to highlight that, just as when training from files, ML.NET also streams data when training from a database. The whole database doesn't need to fit into memory; ML.NET reads from the database as it needs data, so you can handle very large databases (e.g., 50 GB, 100 GB, or larger).


PredictionEnginePool for scalable deployments released as GA

WebApp icon / Azure Function icon

When deploying an ML model into multithreaded and scalable .NET Core web applications and services (such as ASP.NET Core web apps, Web APIs, or Azure Functions), we recommend using the PredictionEnginePool instead of directly creating a PredictionEngine object on every request, for performance and scalability reasons.

The PredictionEnginePool comes as part of the Microsoft.Extensions.ML NuGet package which is being released as GA as part of the ML.NET 1.4 release.
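
As a rough sketch, registering the pool in an ASP.NET Core app looks roughly like the following, assuming hypothetical SentimentInput/SentimentPrediction classes and a model file at "MLModels/model.zip":

// Sketch only: register a PredictionEnginePool in Startup.ConfigureServices
// (requires the Microsoft.Extensions.ML NuGet package)
public void ConfigureServices(IServiceCollection services)
{
    services.AddPredictionEnginePool<SentimentInput, SentimentPrediction>()
            .FromFile(modelName: "SentimentModel",
                      filePath: "MLModels/model.zip",   // hypothetical path
                      watchForChanges: true);           // reload when the model file changes
    services.AddControllers();
}

// In a controller, inject PredictionEnginePool<SentimentInput, SentimentPrediction> and call:
// var prediction = _predictionEnginePool.Predict(modelName: "SentimentModel", example: input);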

For further details on how to deploy a model with the PredictionEnginePool, read the following resources:

For further background information on why the PredictionEnginePool is recommended, read this blog post.

Enhanced for .NET Core 3.0 – Released as GA

.NET Core 3.0 icon

ML.NET now also builds for .NET Core 3.0 (optional). This capability was previously released as a preview and is now generally available.

This means ML.NET can take advantage of the new features when running in a .NET Core 3.0 application. The first new feature we are using is the new hardware intrinsics feature, which allows .NET code to accelerate math operations by using processor specific instructions.

Of course, you can still run ML.NET on older versions, but when running on .NET Framework, or .NET Core 2.2 and below, ML.NET uses C++ code that is hard-coded to x86-based SSE instructions. SSE instructions allow for four 32-bit floating-point numbers to be processed in a single instruction.

Modern x86-based processors also support AVX instructions, which allow for processing eight 32-bit floating-point numbers in one instruction. ML.NET’s C# hardware intrinsics code supports both AVX and SSE instructions and will use the best one available. This means when training on a modern processor, ML.NET will now train faster because it can do more concurrent floating-point operations than it could with the existing C++ code that only supported SSE instructions.
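
To illustrate the pattern (this is not ML.NET's actual code, just a sketch of what the .NET Core 3.0 hardware intrinsics APIs enable), consider adding eight floats:

// Illustrative sketch only: pick the widest instruction set available, falling back to scalar code
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

static unsafe void AddEightFloats(float* left, float* right, float* result)
{
    if (Avx.IsSupported)
    {
        // Eight 32-bit floats in a single AVX instruction
        Avx.Store(result, Avx.Add(Avx.LoadVector256(left), Avx.LoadVector256(right)));
    }
    else if (Sse.IsSupported)
    {
        // Four floats per SSE instruction, so two batches
        Sse.Store(result, Sse.Add(Sse.LoadVector128(left), Sse.LoadVector128(right)));
        Sse.Store(result + 4, Sse.Add(Sse.LoadVector128(left + 4), Sse.LoadVector128(right + 4)));
    }
    else
    {
        // Scalar fallback, e.g. on ARM processors: one number at a time
        for (int i = 0; i < 8; i++) result[i] = left[i] + right[i];
    }
}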

Another advantage the C# hardware intrinsics code brings is that when neither SSE nor AVX are supported by the processor, for example on an ARM chip, ML.NET will fall back to doing the math operations one number at a time. This means more processor architectures are now supported by the core ML.NET components. (Note: There are still some components that don’t work on ARM processors, for example FastTree, LightGBM, and OnnxTransformer. These components are written in C++ code that is not currently compiled for ARM processors).

For more information on how ML.NET uses the new hardware intrinsics APIs in .NET Core 3.0, please check out Brian Lui’s blog post Using .NET Hardware Intrinsics API to accelerate machine learning scenarios.

Use ML.NET in Jupyter notebooks

Jupyter and MLNET logos

Coinciding with Microsoft Ignite 2019, Microsoft is also announcing new .NET support in Jupyter notebooks, so you can now run any .NET code (C# / F#) in Jupyter notebooks and therefore run ML.NET code there as well. Under the covers, this is enabled by the new .NET kernel for Jupyter.

The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, visualizations and narrative text.

For ML.NET this is great for many scenarios, such as exploring and documenting model training experiments, exploring data distributions, cleaning data, plotting charts, and learning scenarios such as ML.NET courses, hands-on labs, and quizzes.

You can simply start exploring what kind of data is loaded in an IDataView:

Exploring data in Jupyter
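
For example, a notebook cell along these lines (a sketch assuming a hypothetical taxi-fare.csv file and a matching TaxiTrip input class) lets you peek at the loaded data:

// Sketch of a C# notebook cell: load a file and preview the first rows of the IDataView
var mlContext = new MLContext();
IDataView data = mlContext.Data.LoadFromTextFile<TaxiTrip>("taxi-fare.csv",
                                                           hasHeader: true,
                                                           separatorChar: ',');

var preview = data.Preview(maxRows: 5);
foreach (var row in preview.RowView)
{
    foreach (var column in row.Values)
        Console.Write($"{column.Key}: {column.Value}   ");
    Console.WriteLine();
}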

Then you can continue by plotting data distribution in the Jupyter notebook following an Exploratory Data Analysis (EDA) approach:

Plotting in Jupyter

You can also train an ML.NET model and have its training time documented:

Training in Jupyter

Right afterwards you can see the model’s quality metrics in the notebook, and have it documented for later review:

Metrics in Jupyter

Additional examples are ‘plotting the results of predictions vs. actual data’ and ‘plotting a regression line along with the predictions vs. actual data’ for a better and visual analysis:

Jupyter additional

For additional explanation details, check out this detailed blog post.

For a direct “try it out” experience, go to this Jupyter notebook hosted on MyBinder and simply run the ML.NET code.

Updates for Model Builder in Visual Studio

The Model Builder tool for Visual Studio has been updated to use the latest ML.NET GA version (1.4), and it includes exciting new features such as a visual experience in Visual Studio for local image classification model training.

Release date: note that at the time of this blog post's publication, this version of Model Builder has not yet shipped, but it will be released very soon, just a few days after Microsoft Ignite and the release of ML.NET 1.4 GA.

Model Builder updated to latest ML.NET GA version

Model Builder was updated to use the latest GA version of ML.NET (1.4), and the generated C# code therefore references ML.NET 1.4 NuGet packages.

Visual and local Image Classification model training in VS

As introduced at the beginning of this blog post, you can locally train an image classification model with the ML.NET API. However, when dealing with image files and folders, the easiest way to do it is with a visual interface like the one provided by Model Builder in Visual Studio, as you can see in the image below:

Model Builder in VS

When using Model Builder to train an image classifier model, you simply select the folder (structured with one sub-folder per image class) that contains the images to use for training and evaluation, and then start training the model. When training finishes, you get generated C# code for inference/predictions, and even for training in case you want to train from other environments such as CI pipelines. It's that easy!

Try ML.NET and Model Builder today!

ML.NET logo

We are excited to release these updates for you and we look forward to seeing what you will build with ML.NET. If you have any questions or feedback, you can ask here at this blog post or at the ML.NET repo at GitHub.

Happy coding!

The ML.NET team.

This blog was authored by Cesar de la Torre plus additional contributions of the ML.NET team.

The post Announcing ML.NET 1.4 general availability (Machine Learning for .NET) appeared first on .NET Blog.


A Sprint Burndown widget with everything you’ve been asking for


With Sprint 160, we are releasing a new Sprint Burndown widget that lets you choose how to burndown for a sprint.

Azure DevOps Sprint Burndown widget - configured by Count of Tasks

You can burndown by Story Points, count of Tasks, or custom fields. You can create a burndown for Epics, Features, and Stories. In fact, you can burndown by summing any field or by counting any type of work item. The new widget displays average burndown, % complete, and scope increase. You can choose to burndown on a specific team, which lets you display sprint burndowns for multiple teams on the same dashboard. With all this great information to display, we let you resize it up to 10×10 on the dashboard.

Azure DevOps Sprint Burndown widget - Configuration

You’ll notice that we have two versions in the widget catalog.

Azure DevOps Sprint Burndown widget - Widget Catalog

The new version requires access to Analytics. Since some customers restrict “View Analytics” permissions, we kept the legacy version as a backup option for them.

To try the new version, you can add it from the widget catalog. Or, you can edit the configuration of an existing legacy Sprint Burndown widget and check the Try the new version now box.

Azure DevOps Sprint Burndown widget - Easy upgrade to new version

For more information, check out our docs.

We know many of you have been asking for these features for a while. We are so happy to provide them. Enjoy!

The post A Sprint Burndown widget with everything you’ve been asking for appeared first on Azure DevOps Blog.

New R Support in Azure Machine Learning


Azure Machine Learning has added support for the R language, as announced at the Ignite conference in Orlando this week.

A new R package azuremlsdk (available to install from Github now, and from CRAN soon), provides the interface to the Azure Machine Learning service. With R functions, you can provision new computing clusters in Azure, and use those to train models with R and deploy them as prediction endpoints for use from any app. You can also launch R-based notebooks in the new Azure Machine Learning studio web interface, or even launch a complete RStudio server instance on your cloud computing resources. Azure Machine Learning service supports the latest version of R (3.6.1) and all R packages (from CRAN, Github, or elsewhere). The video below from The AI Show demonstrates how it all works:

Azure Machine Learning is also great for teams that have both Python and R expertise. You can even call Python models from R (and vice-versa): in this Ignite 2019 talk (presented by me and Daniel Schneider) we deploy R and Python functions as container services and call them both from a Shiny app. You can also find the slides and associated code from that talk in this GitHub repository.

To get started with R in Azure Machine Learning, a good place to start is the tutorial "Train and deploy your first model in R with Azure Machine Learning". If you need an Azure subscription, use this link to sign up for Azure and get $200 in free Azure credits.

Azure Machine Learning: ml.azure.com

Accelerating customer success with Azure migration


This blog post was co-authored by Jeremy Winter, Partner Director and Tanuj Bansal, Senior Director for Microsoft Azure.

At last year's Microsoft Ignite 2018, we shared best practices on how to move to the cloud and why Azure is the best destination for all your apps, data, and infrastructure. Since then, we’re happy to share that a number of customers have joined us on Azure—H&R Block, Albertsons, Devon Energy, and Carlsberg Group, just to name a few. Azure has helped these customers drive innovation, enhance their security posture, and reduce costs with unique offers such as Azure Hybrid Benefit.

At this week’s Microsoft Ignite event in Orlando, we shared the approach these customers took and more news in Azure migration sessions and one-on-one architecture review sessions with Azure engineers.

In this blog, we want to share some of the exciting news we shared at Microsoft Ignite.

Accelerating customer success: Azure Migration Program (AMP)

Since its launch in July, AMP has seen an enthusiastic reception with more than a thousand customers entering the program for migration projects ranging across Windows Server, SQL Server, and Linux workloads. To recap, AMP offers customers:

“We are on a multi-year transformation journey, and cloud migration is an important first step. Azure Migration Program offered the right mix of training, best practice guidance, tooling, and specialized partners to best meet our needs. Importantly, Microsoft was prepared to work hand in hand with us and showed deep commitment to our success.”

- Marc Gunter, Vice President of Infrastructure, Planning and Engineering, Canadian Imperial Bank of Commerce, CIBC

AMP engagements begin by asking and addressing questions on organizational leadership rather than around technology or product. For example:

  • Have you identified an executive sponsor?
  • Have you identified your business, application, and IT team participants?
  • Have you developed a business case with an initial assessment of your on-premises estate and a total cost of ownership (TCO) analysis?
  • Have you identified a partner to help you with migration?

Ultimately, the answers to these questions form the basis of a robust migration plan. To help accelerate this step, customers can now use the new self-serve tool, Strategic Migration Assessment & Readiness Tool (SMART). More details are available in this whitepaper.

Visualizing the Azure migration readiness cycle

Check out this video to learn more about Azure Migration Program and apply today. Get prescriptive self-serve guidance at Azure migration center.

New Azure Migrate capabilities–your hub for all things migration

In parallel with our Azure Migration Program efforts, we’ve continued investing in product innovation to improve the migration experience for customers. Azure Migrate is a one-stop hub for all your migration needs across applications, infrastructure, and data; delivering a simplified, end-to-end migration experience, with a choice of Microsoft and partner tools.

Building on our July release, we're excited to announce support for new migration scenarios and several new capabilities described below.

Application migration

Many of you run .NET web applications on-premises that address internal line-of-business and customer-facing scenarios. Based on your feedback, we have streamlined and automated the Azure migration journey for these applications. Azure Migrate now integrates with App Service Migration Assistant to provide a comprehensive experience for migrating .NET applications to Azure App Service. 
   Visualizing the App Service Migration Assistant dashboard

New Infrastructure Migration for virtual desktop infrastructure (VDI)

Your organization may require a virtualized desktop experience for reasons like meeting compliance regulations, securing access to sensitive data, and managing access to corporate data and apps for a mobile workforce. Windows Virtual Desktop provides the best-virtualized Office and Windows experience on Azure. We have integrated with Lakeside, a Microsoft partner, to enable assessment of on-premises virtual desktops for migration to Windows Virtual Desktop (WVD) on Azure.

New Server Assessment and Migration Capabilities

Since our acquisition of Movere, we have been hard at work integrating its capabilities into our toolsets. We're pleased to announce that this work is now complete—customers can now consume Movere’s innovative discovery and assessment capabilities from Azure Migrate.

We're also announcing discovery of on-premises physical servers, in addition to the existing VMware, and Hyper-V support.

Server Assessment now also provides application discovery capabilities, giving you visibility into the applications installed, their roles, features, and versions enabled on your on-premises virtual machines, which will help you identify the right migration path for each underlying workload. Application discovery is now available for VMware virtual machines.

Many of you have been using the dependency visualization capability to identify all the components that make up your application along with their interdependencies. We have now enabled agentless dependency visualization for VMware virtual machines, currently in preview.

Agentless server migration for VMware virtual machines has also graduated from preview to general availability. 

We have significantly streamlined the process of uploading configuration and performance data of your on-premises servers into Azure Migrate. Now, you can simply use CSV import-based discovery to upload virtual machine configuration and performance details in CSV format. Once the server inventory is uploaded, you can then create assessments on the imported data without having to do appliance-based discovery.

Get started with Azure Migrate, learn more from our documentation, and try our preview features. Visit our UserVoice forum if you would like to provide feedback or learn more about our roadmap.


Azure. Invent with purpose.

Re-imagining collaboration for Visual Studio with Live Share app casting and contacts


The power of Visual Studio for desktop and mobile development is unmatched in the industry, and we wanted to ensure that the best-in-class IDE also had the best collaboration story. Live Share is reimagining that story by lowering the barriers to collaboration, increasing the fidelity of the collaboration experience while building desktop apps, and enhancing the overall workflow.

One of the barriers to collaboration for Visual Studio desktop, mobile, and console application development was the inability to effectively share your progress on an app with a peer. With the Visual Studio 16.4 release, you can now share your application from within a collaboration session. In addition, the cumbersome process of creating and sharing links to start a collaboration session made collaboration feel less intuitive. To solve this problem and make collaboration as low-touch as possible, Live Share now has contacts that are auto-populated with your recent and contextual collaborators, who can be directly invited to a collaboration session. Along with these changes, we have also enhanced the interactivity of a Live Share session with built-in audio calling.

 

Application casting with contacts can enhance your collaboration workflow, whether it's a scheduled pairing session or debugging an issue with someone who has expertise on your team. You don't need to leave the comfort of your IDE to make progress on blockers in your code. We know that good code takes multiple eyes on it, and with direct invitations to your contacts you can easily collaborate with your team.

Getting started with app casting and contacts

To use Live Share with app casting and add contacts, make sure you have Visual Studio 16.4 or higher. Once you have this version of Visual Studio, your Live Share extension will come with app casting when you choose to be an insider. To become an Insider, go to: Tools > Options > Live Share > General > Features and set it to Insiders, as seen in the screenshot below.

With Insiders enabled, you will receive all the coolest new features of Live Share. Live Share is now enabled with not just app casting and contacts, but also VS Live Share Audio. You can jump on a quick call from within a Live Share session without context switching to any other application, thereby extending  your coding productivity time.

Directly invite your peers

Contacts appear automatically in your contacts pane once you are an Insider, and they fall under two categories:

  1. Recent Contacts

These are developers you have previously collaborated with using Live Share. In practice, most developers frequently collaborate with the same people, and therefore, the recent list enables a more repeatable means of working with your team/classroom/etc.

  2. Suggested Contacts

These are developers that have contributed to your currently open project within the last 30 days. In practice, these are the folks you are likely to want to collaborate with, and therefore, we suggest them in order to make it easier to get started.

All your contacts can be invited directly to a Live Share session from within your editor. They’ll get a toast notification that gives them the option to join the session or not. This removes the need to exchange session URLs entirely.

Share your status

With contacts comes the ability to signal your availability for collaboration. Live Share contacts allow you to set your status to Available, Do Not Disturb, Away, or Offline. The idea is to let you choose the level of interaction you would like to have with your peers without needing to context switch. It's now easy not only to directly invite contacts, but also to let them know when you are not available to collaborate. You can read more about how contacts and statuses work here.

Just hit F5

To share the desktop app you are working on with your peer from within a Live Share session, just start a debug session with F5. The screenshot below shows an Expense Reporting WPF application being worked on during a Live Share session.

When the host of the session presses F5 to start a debugging session, the app launches automatically and the guest can view the application on their side as well. All participants in the session can interact with the application and modify it together without committing any changes.

App casting currently works for UWP, WinForms, and Win32 C++ apps, as well as C++ and CMake console apps, with many more to come!

Call from within your IDE

Now you have app casting working and can share your entire working picture with your peer, but sometimes you really need to talk over the fine details. For this, Live Share has built-in audio calling from within your session! The ability to make an audio call from within your IDE lets you stay productive without context switching out of your focus mode while developing.

Let us know what you think!

With app casting your debugging sessions can be a powerful place to do real-time collaboration and make progress on hard bugs. With direct invitations and status sharing with contacts you now have a new ease to your collaboration process.

We love hearing from you, so tell us what you think about this new feature, and how else you plan to use it alongside audio calling, by leaving feedback here.

You can follow Live Share’s newest offerings through our GitHub release notes, and file for feature requests to let us know what you would like to see us offer next.

The post Re-imagining collaboration for Visual Studio with Live Share app casting and contacts appeared first on Visual Studio Blog.

What’s new with Azure Monitor


At Microsoft Ignite 2018, we shared our vision to bring together infrastructure, application, and network monitoring into one unified offering and provide full-stack monitoring for your applications. We have since made rapid strides toward delivering that reality to our customers: from consolidating our logs, metrics, and alerts platforms and integrating existing capabilities such as Application Insights and Log Analytics, to adding new monitoring capabilities for containers and virtual machines, and contributing back to the community through open-source projects such as OpenTelemetry. In this blog, I'll share the newest enhancements from Azure Monitor at Microsoft Ignite, including four examples of how we continue to build a seamless, integrated monitoring solution that works well for cloud-native and legacy workloads and is cost-effective. Be sure to read the full blog post to get a list of all the exciting enhancements.

Monitor containers anywhere

Customers love the convenience of the out of the box monitoring that Azure Monitor for containers provides for all their Azure Kubernetes Service (AKS) clusters. But, you also have Kubernetes clusters running outside AKS. For customers who have hybrid environments, we are now launching the ability to monitor Kubernetes clusters on-premises and on Azure Stack (with AKS Engine) in preview. Just install the container agent and you can create alerts and get insights into the performance of your on-premises workloads in the Azure portal, along with your AKS workloads. Learn more about hybrid Kubernetes monitoring.

Azure Monitor Containers

We are also making the popular Prometheus integration generally available. Azure Monitor can now scrape your Prometheus metrics and store them on your behalf, without you having to operate your own Prometheus collection and storage infrastructure. We also have new Grafana templates for you to visualize all the performance data that is collected from your Kubernetes clusters. Learn more about the Prometheus integration and Grafana templates.

Azure Monitor Containers

Troubleshooting network issues faster

Monitoring a typical cloud network containing application gateways, VPN connections, virtual networks, etc., is a time-consuming activity. To troubleshoot an issue, you need to know the specific networking resources that support your application and scan for the health of these resources across multiple subscriptions and resource groups.

The Network Insights preview in Azure Monitor provides a single dashboard that gives you visibility into network topology, dependencies, health, and other key metrics for related network resources. The insights are derived from data that’s available in Azure Monitor today, so no additional setup or configuration is required.

Azure monitor network

With Network Insights, you have visibility into the health of your network across all of your subscriptions. Intuitive search and detailed topology maps enable faster drill-downs, help localization of networking issues, and suggest remediation in a matter of minutes. Learn more about Network Insights.

Work better and collaborate with workbooks

We've gotten great feedback from customers on Azure Monitor workbooks because they give you a single tool that can combine text, analytic queries, metrics, and parameters into a rich interactive report that you can share and collaborate on with your team members.

Azure Monitor workbook

We have seen customers use workbooks in several ways including exploring the usage of an app, going through a root cause analysis, putting together an operational playbook, and more. We are now making workbooks generally available. Since the launch in preview, we have added support for a number of new data sources, including Azure Data Explorer, Azure Resource Graph, Azure Monitor Logs, Metrics, Alerts, etc., and added visualization options such as charts, grids, tiles, honeycombs, and maps. The Azure Monitor Workbook platform now forms the basis of new monitoring experiences in Azure services such as Azure Sentinel, Storage accounts, Azure Cosmos DB, Azure Active Directory, and SAP Hana. Learn more about Azure Monitor workbooks.

In addition to the highlights of the innovation that we are driving above, here are even more detailed new capabilities we're delivering today:

  •  New agent and additions to profiling and tracing capabilities in Application Insights: For customers who have ASP.NET applications hosted on Azure Virtual Machines (VMs) running IIS, we are adding a new “codeless” onboarding method that uses an agent and does not require access to the code. Learn more.
    • We've added the ability to specify central processing units and memory thresholds for the Application Insights Profiler, so you have better control of when to collect traces. Learn more.
    • We've also added a source code view (via decompilation) in Application Insights Snapshot Debugger to allow you to quickly diagnose the failing code.
  •  Application change analysis enhancements: We have added a lot of features for application change analysis to help you scale. We have introduced the ability to turn on application change analysis at an App Services plan level, you can now see resource manager changes for any resource, and there are richer diagnostics for common scenarios (such as VMs + VNET, SQL server, and Storage). We also added an impact analysis feature to see downstream dependencies for a change and revamped the user experience. Learn more.
  •  Traffic Analytics accelerated processing: The new accelerated processing option in Traffic Analytics allows you to process NSG Flow logs at 10-minute intervals. Learn more.
  •  Live container metrics and live deployments (preview): We are adding the ability to see live performance metrics and live deployments in your AKS cluster. Together with the live events and live logs features, you can get a near real-time performance and health view of your AKS cluster and troubleshoot issues faster.
  •  Log integrations: Using the new Subscription Diagnostic settings, you can now stream every type of activity log for your subscription to Azure Monitor Logs, Event Hub, and Storage and no longer need Subscription Log Profiles or Log Analytics Activity Log connector. In addition, you can now export log data from services such as Azure App Services and Azure Storage accounts directly to Azure Monitor. These features are available for free while in preview.
  •  Azure Monitor for Cosmos DB: You can now view usage, failures, capacity, throughput, and operations for your Azure Cosmos DBs across your subscriptions.  You can see the rollups at subscription, Azure Cosmos DB level or the individual container level and then drill through to the resource for further troubleshooting.

Our customer feedback has been instrumental in shaping these features, and we hope you'll keep the feedback coming. If you have any questions or suggestions, reach out to our Tech Community forum.


Azure. Invent with purpose.

Delivering increased productivity for bot development and deployment


Over the past few years, we have seen many examples of organizations applying conversational AI in meaningful ways. Accenture and Caesars Entertainment are making their employees more productive with enterprise bots. UPS and Asiana Airlines are using bots to deliver better customer service. And finally, BMW and LaLiga have built their own branded voice assistants, taking control of how customers experience their brand. These are just a few of the organizations that have built conversational AI solutions with Azure AI.

This week at Microsoft Ignite, we announced updates to our products to make it easier for organizations to build robust conversational solutions, and to deploy them wherever their customers are. We are sharing some of the highlights below.

Most popular open source SDK for accelerated bot development

We announced the release of Bot Framework SDK 4.6 making it easier for developers to build enterprise-grade conversational AI experiences. Bot Framework includes a set of open source SDKs and tools for bot development, and can easily integrate with Azure Cognitive Services, enabling developers to build bots that can speak to, listen to, and understand users. 

  • Bot Framework SDK for Microsoft Teams. Developers can build Teams bots with built-in support for Teams messaging extensions, proactive messaging and notifications, and more.
  • Bot Framework SDK Skills (preview). Developers can create a reusable conversational skill and also leverage pre-built skills which come with the language models, dialogs, QnA, and integration code. Pre-built skills include calendar, email, task, point of interest, weather, news, and more.
  • Adaptive Dialog (preview). Enables developers to build conversations that can dynamically handle interruptions and context switching.
  • Language Generation (preview). Enables developers to define multiple variations on a phrase and execute simple expressions based on context.
  • Python and Java updates (preview). Enable developers to build in the language of their choice.

“We compared several options, and in terms of flexibility, time-to-market, and interoperating with our existing infrastructure, we felt that Microsoft Bot Framework was by far the best way to go.” - Cole Dutcher, Associate Director of Engineering, Jet.com/Walmart Labs

Simplifying bot building with a low-code visual experience

In order to help developers get started quickly with bot development, we offered bot code samples and templates. To make it even easier to get started, we have launched Bot Framework Composer (preview), an integrated development tool built on top of the Bot Framework SDK. Rather than having to start coding completely from scratch, developers can start with Composer, a low-code visual experience enabling developers to create, edit, test, and refine conversational apps (bots), with the flexibility to extend the bot with custom code. Developers can also have one centralized place to incorporate Azure Cognitive Services, starting initially with language understanding, with more Cognitive Services to come.

Like Bot Framework, the Composer is an open source project. Get started and build a bot today.

BF Composer image

Connecting your bot to your users

Azure Bot Service allows you to take the bot you’ve built using Bot Framework, and host it easily in Azure so you can connect your bot to your users across popular channels like Facebook, Slack, Teams, or even your own websites—wherever your customers are. We’ve announced some new integrations:

  • Direct Line Speech. Voice-first conversational experiences continue to grow in popularity and importance, and today we are announcing the general availability of Direct Line Speech. Direct Line Speech is a new channel that simplifies the creation of end-to-end conversational solutions by streaming speech and text bi-directionally between the client and the bot application using WebSockets on Azure Bot Service. Get started with this step-by-step tutorial.
  • New integrations and adapters. As Bot Framework adoption increases, so does the demand for integrations to new channels. We are pleased to announce a new integration with LivePerson, the provider of conversational platform LiveEngage, so that developers can build customer care scenarios that can escalate conversations to human agents. We also added a new WeChat adapter. If you are interested in connectors to other platforms, learn more about our additional channels and adapters. You’ll find we have a growing list of Bot Framework adapters, including community adapters for platforms like WebEx Teams, Google Hangouts, Google Assistant, Twitter, and Amazon Alexa.
  • Direct Line App Service Extension (preview). Bots using Web Chat and Direct Line can now be isolated from other traffic on the Bot Service by running Direct Line on a dedicated Azure App Service. This also allows bots to participate in Azure VNET configurations. A VNET allows developers to create their own private space in Azure and is crucial to their cloud network as it offers isolation, segmentation, and other key benefits. Learn more about this capability.

“We used Microsoft Azure Bot Service and Cognitive Services to help cope with the complexity of launching Aura in six countries on four separate channels—and do it all seamlessly.” - Chema Alonso, Chief Data Officer, Telefonica

Integration with Azure Cognitive Services (Language Understanding, QnA Maker)

One of the key benefits of using Bot Framework and Azure Bot Service is the ability to also integrate powerful domain-specific AI models using Azure Cognitive Services. We made several new announcements for Azure Cognitive Services, such as the new Custom Neural Voice speech capability, which allows users to create personalized voices.

Language Understanding, an Azure Cognitive Service, introduced new capabilities that enable developers to handle even more sophisticated language structures. These capabilities can better parse complex sentence structures into a hierarchical structure and better understand natural language. It also offers a new UI experience that enables you to more easily benefit from these new models by labeling sub-components and overlapping entities within the user interface. Finally, Language Understanding service now supports Hindi and Arabic, expanding its coverage of languages.

QnA Maker is a cloud-based API service that creates a conversational question-and-answer layer over your data. With QnA Maker, you can build, train, and publish a simple question-and-answer bot based on FAQ URLs, structured documents, product manuals, or editorial content in minutes. QnA Maker's multi-turn capability is now generally available, so you can create multi-turn QnA bots without having to write any code. In addition, QnA Maker is going big on global requirements with increased region availability, ranking-feature support for 10 languages, and chit-chat support for handling small talk in 8 languages. Finally, QnA Maker also supports batch testing so you can quickly test a knowledge base with a standard set of test cases and validate the quality of the answers.

“By using Microsoft Azure Bot Service and Cognitive Services, we’ve been able to continue our own Progressive journey of digital innovation and do it in an agile, fast, and cost-effective way.” - Matt White, Marketing Manager, Personal Lines Acquisition Experience, Progressive Insurance

Power Virtual Agents

At Microsoft Ignite, we also announced Power Virtual Agents, a UI-based bot building experience built on the Bot Framework and made available on the Power Platform. It is designed for business users who are looking for a code-free bot building experience. Bots built using Power Virtual Agents can be extended by developers using the Bot Framework SDK and tools. In fact, this allows collaboration between business users who have subject matter expertise, and developers who have the technical expertise to build custom conversational experiences.

Get started

With these enhancements, we are delivering value across the entire Microsoft Bot Framework SDKs and tools, Language Understanding, and QnA Maker in order to help developers become more productive in building a variety of conversational experiences.

We look forward to seeing what conversational experiences you will build for your customers.

Get started today.


Azure. Invent with purpose.

Azure SQL Data Warehouse is now Azure Synapse Analytics


On November fourth, we announced Azure Synapse Analytics, the next evolution of Azure SQL Data Warehouse. Azure Synapse is a limitless analytics service that brings together enterprise data warehousing and Big Data analytics. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources—at scale. Azure Synapse brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate business intelligence and machine learning needs.

With Azure Synapse, data professionals can query both relational and non-relational data using the familiar SQL language. This can be done using either serverless on-demand queries for data exploration and ad hoc analysis or provisioned resources for your most demanding data warehousing needs. A single service for any workload.

In fact, it’s the first and only analytics system to have run all the TPC-H queries at petabyte-scale. For current SQL Data Warehouse customers, you can continue running your existing data warehouse workloads in production today with Azure Synapse and will automatically benefit from the new preview capabilities when they become generally available. You can sign up to preview new features like serverless on-demand query, Azure Synapse studio, and Apache Spark™ integration.

  A diagram showing how Azure Synapse Analytics connects Power BI, Azure Machine Learning, and your ecosystem.

Taking SQL beyond data warehousing

A cloud native, distributed SQL processing engine is at the foundation of Azure Synapse and is what enables the service to support the most demanding enterprise data warehousing workloads. This week at Ignite we introduced a number of exciting features to make data warehousing with Azure Synapse easier and allow organizations to use SQL for a broader set of analytics use cases.

Unlock powerful insights faster from all data

Azure Synapse deeply integrates with Power BI and Azure Machine Learning to drive insights for all users, from data scientists coding with statistics to the business user with Power BI. And to make all types of analytics possible, we’re announcing native and built-in prediction support, as well as runtime level improvements to how Azure Synapse handles streaming data, parquet files, and Polybase. Let’s dive into more detail:

  • With the native PREDICT statement, you can score machine learning models within your data warehouse—avoiding the need for large and complex data movement. The PREDICT function (available in preview) relies on open model framework and takes user data as input to generate predictions. Users can convert existing models trained in Azure Machine Learning, Apache Spark™, or other frameworks into an internal format representation without having to start from scratch, accelerating time to insight.

A diagram showing how you can create and upload models to score them with SQL Analytics in Data Warehouse.

  • We’ve enabled direct streaming ingestion support and the ability to execute analytical queries over streaming data. Capabilities such as joins across multiple streaming inputs, aggregations within one or more streaming inputs, transformation of semi-structured data, and multiple temporal windows are all supported directly in your data warehousing environment (available in preview). For streaming ingestion, customers can integrate with Event Hubs (including Event Hubs for Kafka) and IoT Hubs.

  • We’re also removing the barrier that inhibits securely and easily sharing data inside or outside your organization with Azure Data Share integration for sharing both data lake and data warehouse data.

  • By using new ParquetDirect technology, we are making interactive queries over the data lake a reality (in preview). It’s designed to access Parquet files with native support directly built into the engine. Through improved data scan rates, intelligent data caching and columnstore batch processing, we’ve improved Polybase execution by over 13x.

A graph showing the performance improvement with ParquetDirect.

Workload isolation

To support customers as they democratize their data warehouses, we are announcing new features for intelligent workload management. The new Workload Isolation functionality allows you to manage the execution of heterogeneous workloads while providing flexibility and control over data warehouse resources. This leads to improved execution predictability and enhances the ability to satisfy predefined SLAs.

An image showing workload isolation in Data Warehouse.

COPY statement

Analyzing petabyte-scale data requires ingesting petabyte-scale data. To streamline the data ingestion process, we are introducing a simple and flexible COPY statement. With only one command, Azure Synapse now enables data to be seamlessly ingested into a data warehouse in a fast and secure manner.

This new COPY statement enables using a single T-SQL statement to load data, parse standard CSV files, and more.

COPY statement sample code:

COPY INTO dbo.[FactOnlineSales] FROM 'https://contoso.blob.core.windows.net/Sales/'

Safe keeping for data with unmatched security

Azure has the most advanced security and privacy features in the market. These features are built into the fabric of Azure Synapse, such as automated threat detection and always-on data encryption. And for fine-grained access control businesses can ensure data stays safe and private using column-level security, native row-level security, and dynamic data masking (now generally available) to automatically protect sensitive data in real time.

To further enhance security and privacy, we are introducing Azure Private Link. It provides a secure and scalable way to consume deployed resources from your own Azure Virtual Network (VNet). A secure connection is established using a consent-based call flow. Once established, all data that flows between Azure Synapse and service consumers is isolated from the internet and stays on the Microsoft network. There is no longer a need for gateways, network address translation (NAT) devices, or public IP addresses to communicate with the service.

An image showing Azure Private Link—a secure and scalable way to consume deployed resources from your own Azure Virtual Network (VNet).

Get started today

Businesses can continue running their existing data warehouse workloads in production today with generally available features on Azure Synapse.


Azure. Invent with purpose.


    10 user experience updates to the Azure portal


    We're constantly working to improve your user experience in the Azure portal. Our goal is to offer you a productive and easy-to-use single pane of glass where you can build, manage, and monitor your Azure services, applications, and infrastructure. In this post, I'd like to share the highlights of our latest experience improvements, including:

    Improved portal home experience

    We have improved the Azure portal home page to increase focus and clarity and to make things that are important to you easily accessible.

    Image of the simplified Azure portal home.
      Figure 1 – simplified Azure portal home.

    We’ve organized these into differentiated sections for ease of use:

    • Services and resources (dynamic): the top section has dynamic content that gets adjusted based on your usage without requiring any additional customizations. The more you use the portal, the more it adjusts to you!
    • Common entry points and useful info (static): the lower section contains static content with common entry points to provide quick access to main navigation flows that are always there, enabling users to develop muscle memory for repeated usage.

    Screenshot showing the new sections of the Azure home page.

    Figure 2 – sections of the home page.

    The Azure services section provides quick access to the Azure Marketplace, a list of eight of the most-used Azure services, and access to browse the entire Azure offering. The list of services is populated by default with some of our most popular services and gets automatically updated with your most recently used services. The Recent resources section shows a list of your recently used resources. Both lists get updated as you use the product. Our goal is to bring relevant services and instances front and center without requiring customization. The more you use the product, the more useful it gets for you! The rest of the sections are static, providing important points of reference for navigation and access to key Azure products, services, content, and training.

    The overall home experience has been streamlined by hiding the left navigation bar under an always present menu button in the top navigation bar:

    A screenshot pointing out the menu button in the Azure portal.

    Figure 3 – The menu button

    The main motivation for this change is improving focus, reducing distractions and redundancy, and to enable more immersive experiences. Before this change, when you were immersed in a workload in the portal you always had two vertical menus side by side, the left navigation bar and the menu for the experience. The left navigation bar is still available with all its functionality, including favorites, through the menu button at the top bar, always only one click away.

    An image comparing the new and old left navigation bars.

    Figure 4 – The new experience allows for more focus.

    If you prefer the old visual, having the left navigation always present, you can always bring it back using the Portal Settings panel.

    New service cards

    We have added hover cards associated with each service that show contextual information and provide direct access to some of the most common workflows. These hover cards are displayed after the cursor rests for about a second on a service tile. We used the same interaction pattern and design that Outlook uses for identities (users and groups), which is well established with our customer base.

    A gif of the Azure services page and the Virtual machines hover card.

    Figure 5 – hover card for virtual machines.

    The cards expose relevant contextual information and actions for a service, including:

    • Create an instance: this provides quick access to a very common flow, short-circuiting the intermediate screens before launching the creation.
    • Browse instances: browse the full list of instances of that service.
    • Recently used: the last three recently used instances of that service, providing direct contextual access.
    • Microsoft Learn content: specialized free training curated for that service. The curation has been done by the Microsoft Learn team based on usage data and customer feedback.
    • Links to documents: key documents to learn or use the product (quick starts, technical docs, pricing.)
    • Free offerings available: if the service has free options available, surface them.

    A screenshot showing the anatomy of the Virtual machines service card.

    Figure 6 – Anatomy of the card

    The cards help improve multiple aspects, including more efficient customer journeys, better discoverability, and contextualized information, all presented in the context of one service. The cards also help customers at all levels of expertise: while new customers can benefit from Microsoft Learn content and free offerings, advanced customers have a faster path to create instances or access their recently used instances of that service.

    The card is not only shown on the home page. It is available in every place we display a service, such as the left navigation bar, the All services list, and the Azure home page.

    Extended Microsoft Learn integration

    Microsoft Learn provides official high-quality free learning material for Microsoft technologies. In this portal update we have introduced several contextual integration points:

    • Service browsing: contextual integration at the service category level (compute, storage, web, etc.)
    • Service cards: contextual integration at the service level (virtual machine, Cosmos DB, etc.) available in Azure home page, left navigation, and service browsing experience.
    • Azure QuickStart center: integration of most popular trainings in the landing page
    • Azure home: direct access to the main Microsoft Learn entry point

    Moving forward, the Azure portal and Microsoft Learn integration will continue to grow, to help you improve your Azure journey!

    Enhanced service browsing experience

    Azure is big and gets bigger every day. Navigating through Azure’s offering in the portal can be intimidating and challenging due to the vast set of available services. To make this easier, we’ve made the following updates:

    • Improved global search: improved performance and functionality when searching for services in the global search box in the top bar of the portal. This improved search is also always present and available in your portal session.
    • Improved service browsing experience: improved the All services experience adding an overview category supporting progressive disclosure of services, reducing visual clutter, and adding contextual Microsoft Learn content.

    For service browsing, we introduced an overview category with the goal of progressively disclosing information.

    A screenshot showing the progressive disclosure of information.

    Figure 7 – progressive disclosure of information and better discoverability

    The new Overview category presents a list of 15 of Azure’s most popular services, curated Microsoft Learn training content, and access to key functionality like Azure QuickStart center and free offerings.

    If the service that you are looking for is not available on this screen you can use the service search functionality, at the top left, or you can browse through the different categories available, at the left of the screen. When displaying a category, we are now surfacing contextual and free Microsoft Learn content to assist you in your Azure learning journey.

    A screenshot of the service categories.

    Figure 8 – service category with contextual and free Microsoft Learn integration. The training offered in this category is contextual and related to databases in this case.

    Improved instance browsing experience

    The resource instance browsing experience, going through the list of instances and services, is one of the most common entry points for customers using the portal. We are introducing an updated experience that leverages the power of Azure Resource Graph to provide improved performance, better filtering and sorting options, and better grouping, and allows exporting your resource lists to a CSV file.

    A screenshot showcasing the improved resource browsing experience.

    Figure 9 – improved resource browsing experience

    As of this month, this experience will be available for more than 70 services and over the next few months it will be rolled out across the entire platform.

    Improved Azure Resource Graph experience

    The Azure Resource Graph Explorer available in the portal enables you to write queries and create dashboards using the full power of Azure Resource Graph. Here is a video that shows how to use Resource Graph to write queries and create an inventory dashboard for your Azure subscriptions.

    We have now introduced Azure Resource Graph Queries in the Azure portal as a new top-level resource. Basically, you can save any Kusto Query Language (KQL) query as a resource in your Azure subscription. Like any other resource you can share it with colleagues, set permissions, check activity logs, and tag it.

    A screenshot showing Azure Graph Queries.

    Figure 10 – Azure Graph Queries

    Automatic refresh in Azure Dashboards

    We have added automatic refresh to Azure dashboards, allowing you to automatically refresh your dashboards at one of several time intervals.

    A screenshot showing how to configure automatic refresh, and choose the time period.

    Figure 11 – Configuring automatic refresh

    Improved service icons

    We've updated all of the service icons in the Azure portal with a more consistent and modern look. All these icons have been designed together as a family to provide better visual consistency and reduce distractions.

    An image showing all of the improved icons for the Azure portal.

    Figure 12 – improved icons

    Simplified settings panel

    The settings panel has been simplified. The main reason for this change is that many customers could not find the “Language & region” settings in the previous design and were asking us for capabilities that were already available in the portal; the portal supports 18 languages and dozens of regional formats, and not being able to find these options was a common source of confusion for many of our users. The new design separates the General settings from the Language & region settings.

    Screenshots showcasing the different portal settings tabs for General and language & region.

    Figure 13 – separation of general and localization settings

    New landing page for Azure Mobile application

    The Azure mobile app enables you to stay connected, informed, and in control of your Azure assets while on the go. The app is available for iOS and Android devices.

    We have added a brand-new landing screen to the Azure Mobile App that brings all important information together as soon as you open the application. The new Home experience is composed of multiple cards with support for:

    • Azure services
    • Recent resources
    • Latest alerts
    • Service Health
    • Resource groups
    • Favorites

    The home view is fully customizable: you can decide which sections to show and in what order to show them.

    An image showing the new home page in the Azure Mobile App.

    Figure 14 – new home in the Azure Mobile App

    If you have not tried the Azure mobile app yet, make sure to give it a try.

    Let us know what you think

    We’ve gone through a lot of new capabilities, and we still haven’t covered everything coming in this release! The team is hard at work improving the experience and is always eager to get your feedback and learn how we can make your experience better.


    Azure. Invent with purpose.


    Review Apps in Azure Pipelines


    Git is an incredibly effective way to collaborate on application development. Developers collaborate in feature branches and Pull Requests (PRs). Developers submit PRs for review, and once the review is complete the code change is merged into the main branch. This process works well for static code diffs where newly committed code is reviewed to meet the team’s coding practices.

    You can extend this further by triggering a continuous integration (CI) process on each PR that builds the code and runs different types of tests. This helps ensure that the new changes meet the quality bar and that there are no breaking changes.

    Unfortunately, this process is limited to team members who directly work on that specific part of the application and have context to provide reliable feedback. Additionally, when working with microservice applications, there is minimal visibility into the end-to-end impact on the application behavior when modifying an individual service. This is where the new Review Apps feature in Azure Pipelines can help.

    The new Review Apps feature in Azure Pipelines, currently available in public preview, works by deploying every pull request (PR) from your Git repository to a dynamically-created Environment resource. You and your team can see how the changes in the PR look, as well as work with other dependent services before they’re merged into the main branch and deployed to production. This significantly helps you to “shift left” and improves developer/team productivity and application quality.

    You can use the “new Pipeline creation” feature to set up Review Apps in just two simple steps. We have added these features for Kubernetes first, and over the next few months we will work on adding support for other Azure services as well.

    Once you have configured a Review App, you will have a working pipeline YAML file in your Git repository. Now you can:

    • Create a new branch for the changes you want to commit
    • Start a pull request (PR) so other team members can review the code changes
    • A Review App is created, with the latest changes in the pull request deployed to a dynamically created environment resource
    • For each commit to the same branch, the changes are deployed again, and the PR shows the details of each deployment
    • Once the PR has been reviewed and accepted, the feature branch is merged into master, where it’s deployed to a staging environment
    • After being approved in staging, the changes that were merged into master are deployed to production

    The key is in constantly updating these dynamically created review environment resources as you continue to work. With this in place, you get a preview of every pull request. This is perfect for complex changes, especially in a microservices scenario, where static code review and/or unit testing alone are not enough.

    If you are using Azure Kubernetes Service (AKS) with Azure Dev Spaces and Azure Pipelines, you can easily test your PR code in the context of the broader application running in AKS. As a bonus, team members such as product managers and designers can become part of the review process during early stages of development.

    Please try out Review Apps and share your feedback via Twitter on @AzureDevOps, or using Developer Community.

    The post Review Apps in Azure Pipelines appeared first on Azure DevOps Blog.

    EA and Visual Studio’s Linux Support


    EA is using Visual Studio’s cross-platform support to cross-compile on Windows and debug on Linux. The following post is written by Ben May, a Senior Software Engineer of Engineering Workflows at EA. Thanks Ben and EA for your partnership, and for helping us make Visual Studio the best IDE for C++ cross-platform development.

    At EA, our Frostbite Engine has a Linux component used for the dedicated servers that service many of our most popular games. When we saw that Microsoft was adding Linux support as a workload in Visual Studio, it caught my interest! Our game developers are used to a Windows environment for development, so we thought that forcing them to develop directly in a Linux environment would be a difficult ask; instead we decided to use clang and cross-compile from Windows, targeting Linux. Initially we had wired this up ourselves using Visual Studio Makefile Projects that called make to build our source, used a variety of tools to copy binaries over SSH to Linux machines, and wrote tooling to start gdbserver on the remote Linux machine so we could debug from the PC. After the release of the Visual Studio Linux Workload, we found that Microsoft had wrapped all of these tools and processes up nicely into a workload we could ask our developers to install and then debug directly in Visual Studio! So far, the WSL integration and remote debugging the workload provides have been a success and have drastically cleaned up our tools and processes around Linux debugging and development. Our developers have been really happy with the improved experience.

    I will now explain in more detail what we actually do.

    Internal build setup

    Our internal build setup uses our own proprietary tool to take our own cross-platform build format and generate many types of outputs (vcxproj, csproj, make, etc.). When we decided to add Linux to our list of supported platforms, we decided to set up the primary workflow for our developers to be initiated from a Windows-based PC with Visual Studio, since this is the environment we use for almost all of our other platforms. Another requirement was for our CI (Continuous Integration/build farm) to be able to validate that our code compiled for Linux without needing to set up Linux-hosted CI VMs or a remote Linux system to compile the code, since that would be much more expensive and complicated to manage and support. These requirements led us to decide to cross-compile our codebase on Windows directly using clang.

    For our cross-compiler we use what is called a “Canadian cross” compiler setup. See toolchain types for more details on the kinds of cross-compilation you can do, and a Wikipedia link for why it’s called a “Canadian cross.” The primary reason ours is a “Canadian cross” is that we built the LLVM and GCC toolchains on a Linux machine and moved their pieces to a Windows machine, combined with the Windows build of clang. Based on that, our cross-compiler setup on Windows has the following in it:

    1. We use LLVM
    2. We combine the Windows version of LLVM with the Linux one on the Windows machine.  This is to get all of the libs/headers required for targeting Linux.
    3. We also use the GCC toolchain with LLVM. To build the gcc tools for Windows we use crosstool-NG on a Linux host.
    4. Then, when building, you need to pass -target x86_64-pc-linux-gnu and --sysroot=<path to gcc cross tools>
    5. You may need to initially use the -Wno-nonportable-include-path warning suppression, since Windows is not case-sensitive and fixing all of the include path errors might be a bit of a lengthy task (although I recommend doing it!)

    After we have assembled our toolchain, we use our proprietary generator to generate makefiles that build our code by referencing the cross-compiler setup above, along with a set of vcxproj files of type “Linux Makefile” and a .sln file. It is at this point that we move into Visual Studio for integration of our workflows into the IDE using the Visual Studio Linux Workload.

    Visual Studio integration

    Developers need to ensure they have the ‘Linux development with C++’ workload installed:

    The Linux development with C++ workload in the Visual Studio installer.

    After ensuring the correct components are installed, we use the built-in features of the Linux Makefile projects. To build code we simply select Build from within Visual Studio, which executes our cross-compiler and outputs binaries. Built into the Visual Studio Linux Projects is the ability to deploy and debug on a Linux host.

    Building on WSL

    We can configure our generator to use 2 different deployment/debugging setups:

    1. WSL (Windows Subsystem For Linux)
    2. Remote Linux Host

    The most convenient setup is WSL, assuming you do not have to render anything to the screen. In other words, if you only need to develop headless unit tests or console applications, this is the easiest and fastest way to iterate.

    If the developer is using WSL, the binaries do not actually need to be deployed, since WSL can access them directly from the current Windows machine. This saves time, as the binaries no longer need to be copied to a remote machine (some of our binaries can get quite large, which can add several seconds to an incremental build-and-debug session).

    Here is an example of me building EASTL, an open-source EA library, using Visual Studio Linux Makefile Projects and our cross-compiler:

    Debugging EASTL using Visual Studio Linux Makefile Projects.

    You can see I’ve placed a breakpoint, and I configured my environment to use WSL when running, so when I debug it will launch the test binary in WSL and attach Visual Studio’s gdb debugger without needing to copy the binary first. This is achieved by setting the Remote Build Root, Project, and Deploy directories to the WSL path of the same folder on my Windows machine (WSL exposes Windows drives under /mnt, so a folder on the C: drive is visible as a path under /mnt/c).

    The "General" configuration properties for a Linux Makefile Project, where "Remote Build Root Directory" is a mounted WSL path.

    Here is a quick example of me debugging and hitting a breakpoint, then continuing to run and finish the unit test:

    EA debugging a unit test on the Windows Subsystem for Linux.

    Building on a remote Linux system

    For a remote Linux machine setup where we do not use WSL, the only additional thing we need to worry about is deployment of the built executable and its dependent dynamic libraries or content files. The way we do this is to set up a local-to-remote file mapping and have Visual Studio’s post-build event copy the files. Inside the properties of the exe’s project, under “Build Events” -> “Post-Build Event” -> “Additional Files To Copy”, we specify the list of files that need to be copied to the remote machine after the build completes. This needs to happen so that when we click “Debug” the binaries are already there on the remote machine.

    Setting up a remote post-build event with additional files to copy in Visual Studio's Property Pages.

    You can see that the syntax is a mapping of local path to remote path, which is quite handy for mapping files between the two file systems.

    Asks for the future

    One downside to this is that the deployment is done during the “Build” phase. What we would ideally like is to have three distinct phases when working with Linux:

    1. Build
    2. Deploy
    3. Debug

    This would let you build the code without needing a connection to a remote machine, which is useful in CI environments, or when someone just wants to build locally, fix compile issues for Linux, and submit them, letting automated testing validate the fixes. Having Deploy and Debug as distinct phases is also nice, so that you could deploy from Visual Studio but then potentially invoke and debug directly from the Linux machine.

    It is also worth noting that we are still using make “under the hood” to execute our builds for Linux, but the Visual Studio Linux Workload also supports a full MSBuild-based Linux Project. We have not spent much time trying that out yet, but it would be nice if we could use it, in an effort to build for Linux with MSBuild just like we do for most of our other platforms.

    We have been working closely with the Visual Studio team on the Linux component, and we have been following the Visual Studio 2019 Preview builds very closely to test and iterate on these workflows with them. Our hope is that in future releases we will be able to:

    1. Fully separate Build from Deployment and Debug for local cross-compilation scenarios.
    2. Set up “incremental” build and deployment detection in the Linux Makefile Projects so that we don’t need to respawn make for all projects in our solutions (some of our large solutions have > 500 projects). This is mainly for faster incremental iteration times.
    3. Get direct WSL debugging added to the Linux Makefile Projects. Currently, since the Linux Makefile Projects don’t support WSL directly, we still need to debug over an SSH connection to WSL, which means we have to have WSL running with sshd on it. This support is already integrated for MSBuild-based Linux applications and CMake projects, but not yet for Makefile projects.
    4. Try the MSBuild-based Linux Project files and work with Microsoft to get those to operate with a local toolchain (our cross-compiler) while still providing the same deployment and debugging features. This would also help us solve the Makefile incremental problem mentioned above.

    All in all, this workflow is very slick for us! It allows our developers to use an IDE and operating system they are comfortable working in while still being able to build and debug Linux applications!

    -Ben May, Senior Software Engineer, Engineering Workflows at EA

    Thanks again for your partnership, Ben! Our team looks forward to continuing to improve the product based on feedback we receive from the community. If you’re interested in building the same project on both Windows and Linux, check out our native support for CMake. You can check out a similar story written by the MySQL Server Team at Oracle.

    The post EA and Visual Studio’s Linux Support appeared first on C++ Team Blog.

    Windows 10 SDK Preview Build 19013 available now!


    Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 19013 or greater). The Preview SDK Build 19013 contains bug fixes and under-development changes to the API surface area.

    The Preview SDK can be downloaded from the developer section on Windows Insider.

    For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.

    Things to note:

    • This build works in conjunction with previously released SDKs and Visual Studio 2017 and 2019. You can install this SDK and still continue to submit apps that target Windows 10 build 1903 or earlier to the Microsoft Store.
    • The Windows SDK will now formally only be supported by Visual Studio 2017 and greater. You can download Visual Studio 2019 here.
    • This build of the Windows SDK will install only on Windows 10 Insider Preview builds.
    • In order to assist with script access to the SDK, the ISO will also be able to be accessed through the following static URL: https://software-download.microsoft.com/download/sg/Windows_InsiderPreview_SDK_en-us_19013_1.iso.

    Tools Updates

    Message Compiler (mc.exe)

    • Now detects the Unicode byte order mark (BOM) in .mc files. If the .mc file starts with a UTF-8 BOM, it will be read as a UTF-8 file. Otherwise, if it starts with a UTF-16LE BOM, it will be read as a UTF-16LE file. If the -u parameter was specified, it will be read as a UTF-16LE file. Otherwise, it will be read using the current code page (CP_ACP).
    • Now avoids one-definition-rule (ODR) problems in MC-generated C/C++ ETW helpers caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of MCGEN_EVENTWRITETRANSFER are linked into the same binary, the MC-generated ETW helpers will now respect the definition of MCGEN_EVENTWRITETRANSFER in each .cpp file instead of arbitrarily picking one or the other).

    Windows Trace Preprocessor (tracewpp.exe)

    • Now supports Unicode input (.ini, .tpl, and source code) files. Input files starting with a UTF-8 or UTF-16 byte order mark (BOM) will be read as Unicode. Input files that do not start with a BOM will be read using the current code page (CP_ACP). For backwards-compatibility, if the -UnicodeIgnore command-line parameter is specified, files starting with a UTF-16 BOM will be treated as empty.
    • Now supports Unicode output (.tmh) files. By default, output files will be encoded using the current code page (CP_ACP). Use command-line parameters -cp:UTF-8 or -cp:UTF-16 to generate Unicode output files.
    • Behavior change: tracewpp now converts all input text to Unicode, performs processing in Unicode, and converts output text to the specified output encoding. Earlier versions of tracewpp avoided Unicode conversions and performed text processing assuming a single-byte character set. This may lead to behavior changes in cases where the input files do not conform to the current code page. In cases where this is a problem, consider converting the input files to UTF-8 (with BOM) and/or using the -cp:UTF-8 command-line parameter to avoid encoding ambiguity.

    TraceLoggingProvider.h

    • Now avoids one-definition-rule (ODR) problems caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of TLG_EVENT_WRITE_TRANSFER are linked into the same binary, the TraceLoggingProvider.h helpers will now respect the definition of TLG_EVENT_WRITE_TRANSFER in each .cpp file instead of arbitrarily picking one or the other); a minimal sketch of this per-file configuration follows this list.
    • In C++ code, the TraceLoggingWrite macro has been updated to enable better code sharing between similar events using variadic templates.
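
    To make the per-file configuration concrete, here is a minimal sketch, not taken from the SDK documentation, of a single .cpp file that overrides the event-write call used by TraceLoggingProvider.h. The function MyEventWriteTransfer, the provider name, and the GUID are hypothetical; the same per-translation-unit pattern applies to MCGEN_EVENTWRITETRANSFER for MC-generated ETW helpers.

    // Minimal sketch (hypothetical names): configure TraceLoggingProvider.h for this
    // translation unit only. With the ODR fix above, the generated helpers honor this
    // macro even if another .cpp file in the same binary defines it differently.
    #include <windows.h>
    #include <evntprov.h>
    #pragma comment(lib, "advapi32.lib")

    // Hypothetical replacement with the same signature as EventWriteTransfer.
    ULONG __stdcall MyEventWriteTransfer(
        REGHANDLE hProvider, EVENT_DESCRIPTOR const* pDescriptor,
        LPCGUID pActivityId, LPCGUID pRelatedActivityId,
        ULONG cData, EVENT_DATA_DESCRIPTOR* pData)
    {
        // Forward to the real API; a real wrapper might add filtering or buffering.
        return EventWriteTransfer(hProvider, pDescriptor, pActivityId,
                                  pRelatedActivityId, cData, pData);
    }

    // Define the configuration macro before including the header.
    #define TLG_EVENT_WRITE_TRANSFER MyEventWriteTransfer
    #include <TraceLoggingProvider.h>

    // Hypothetical provider name and GUID; call TraceLoggingRegister at startup.
    TRACELOGGING_DEFINE_PROVIDER(
        g_hMyProvider,
        "MyCompany.MySample",
        (0x3970f9cf, 0x2c0c, 0x4f11, 0xb1, 0xcc, 0xe3, 0xa1, 0xe9, 0x95, 0x88, 0x33));

    void LogSample(int value)
    {
        // The updated TraceLoggingWrite macro shares more generated code between
        // events that pass similar argument lists.
        TraceLoggingWrite(g_hMyProvider, "SampleEvent",
                          TraceLoggingValue(value, "Value"));
    }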

    Signing your apps with Device Guard Signing

    Windows SDK Flight NuGet Feed

    We have stood up a NuGet feed for the flighted builds of the SDK. You can now test preliminary builds of the Windows 10 WinRT API Pack, as well as a microsoft.windows.sdk.headless.contracts NuGet package.

    We use the following feed to flight our NuGet packages.

    Microsoft.Windows.SDK.Contracts, which can be used to add the latest Windows Runtime API support to your .NET Framework 4.5+ and .NET Core 3.0+ libraries and apps.

    The Windows 10 WinRT API Pack enables you to add the latest Windows Runtime API support to your .NET Framework 4.5+ and .NET Core 3.0+ libraries and apps.

    Microsoft.Windows.SDK.Headless.Contracts provides a subset of the Windows Runtime APIs for console apps and excludes the APIs associated with a graphical user interface. This NuGet package is used in conjunction with Windows ML container development. Check out the Getting Started guide for more information.

    Breaking Changes

    Removal of api-ms-win-net-isolation-l1-1-0.lib

    In this release api-ms-win-net-isolation-l1-1-0.lib has been removed from the Windows SDK. Apps that were linking against api-ms-win-net-isolation-l1-1-0.lib can switch to OneCoreUAP.lib as a replacement.

    Removal of IRPROPS.LIB

    In this release irprops.lib has been removed from the Windows SDK. Apps that were linking against irprops.lib can switch to bthprops.lib as a drop-in replacement.
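
    If your project references these import libraries from source rather than through the project’s linker settings, the switch can be a one-line change. The following is a minimal sketch assuming MSVC’s #pragma comment(lib, ...) mechanism; most projects will instead update Linker > Input > Additional Dependencies.

    // Before (these libraries are no longer shipped in this SDK release):
    //   #pragma comment(lib, "api-ms-win-net-isolation-l1-1-0.lib")
    //   #pragma comment(lib, "irprops.lib")

    // After: link the documented replacements instead.
    #pragma comment(lib, "OneCoreUAP.lib")  // replacement for api-ms-win-net-isolation-l1-1-0.lib
    #pragma comment(lib, "bthprops.lib")    // drop-in replacement for irprops.lib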

    Removal of WUAPICommon.H and WUAPICommon.IDL

    In this release we have moved the ENUM tagServerSelection from WUAPICommon.H to wuapi.h and removed the WUAPICommon.H header. If you would like to use the ENUM tagServerSelection, you will need to include wuapi.h or wuapi.idl.
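
    As a small illustration, here is a hedged sketch of what the include change looks like for code that consumes the enum. The values shown (ssManagedServer, ssWindowsUpdate) are existing members of tagServerSelection in the Windows Update Agent API, and the helper function is hypothetical.

    // Sketch: include wuapi.h directly now that WUAPICommon.H has been removed.
    #include <windows.h>
    #include <wuapi.h>

    // Hypothetical helper: pick a Windows Update server selection value.
    ServerSelection PickServerSelection(bool useManagedServer)
    {
        // ServerSelection is the typedef for ENUM tagServerSelection.
        return useManagedServer ? ssManagedServer : ssWindowsUpdate;
    }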

    API Updates, Additions and Removals

    The following APIs have been added to the platform since the release of Windows 10 SDK, version 1903, build 18362.

    Additions:

     
    
    namespace Windows.AI.MachineLearning {
      public sealed class LearningModelSessionOptions {
        bool CloseModelOnSessionCreation { get; set; }
      }
    }
    namespace Windows.ApplicationModel {
      public sealed class AppInfo {
        public static AppInfo Current { get; }
        Package Package { get; }
        public static AppInfo GetFromAppUserModelId(string appUserModelId);
        public static AppInfo GetFromAppUserModelIdForUser(User user, string appUserModelId);
      }
      public interface IAppInfoStatics
      public sealed class Package {
        StorageFolder EffectiveExternalLocation { get; }
        string EffectiveExternalPath { get; }
        string EffectivePath { get; }
        string InstalledPath { get; }
        bool IsStub { get; }
        StorageFolder MachineExternalLocation { get; }
        string MachineExternalPath { get; }
        string MutablePath { get; }
        StorageFolder UserExternalLocation { get; }
        string UserExternalPath { get; }
        IVectorView<AppListEntry> GetAppListEntries();
        RandomAccessStreamReference GetLogoAsRandomAccessStreamReference(Size size);
      }
    }
    namespace Windows.ApplicationModel.AppService {
      public enum AppServiceConnectionStatus {
        AuthenticationError = 8,
        DisabledByPolicy = 10,
        NetworkNotAvailable = 9,
        WebServiceUnavailable = 11,
      }
      public enum AppServiceResponseStatus {
        AppUnavailable = 6,
        AuthenticationError = 7,
        DisabledByPolicy = 9,
        NetworkNotAvailable = 8,
        WebServiceUnavailable = 10,
      }
      public enum StatelessAppServiceResponseStatus {
        AuthenticationError = 11,
        DisabledByPolicy = 13,
        NetworkNotAvailable = 12,
        WebServiceUnavailable = 14,
      }
    }
    namespace Windows.ApplicationModel.Background {
      public sealed class BackgroundTaskBuilder {
        void SetTaskEntryPointClsid(Guid TaskEntryPoint);
      }
      public sealed class BluetoothLEAdvertisementPublisherTrigger : IBackgroundTrigger {
        bool IncludeTransmitPowerLevel { get; set; }
        bool IsAnonymous { get; set; }
        IReference<short> PreferredTransmitPowerLevelInDBm { get; set; }
        bool UseExtendedFormat { get; set; }
      }
      public sealed class BluetoothLEAdvertisementWatcherTrigger : IBackgroundTrigger {
        bool AllowExtendedAdvertisements { get; set; }
      }
    }
    namespace Windows.ApplicationModel.ConversationalAgent {
      public sealed class ActivationSignalDetectionConfiguration
      public enum ActivationSignalDetectionTrainingDataFormat
      public sealed class ActivationSignalDetector
      public enum ActivationSignalDetectorKind
      public enum ActivationSignalDetectorPowerState
      public sealed class ConversationalAgentDetectorManager
      public sealed class DetectionConfigurationAvailabilityChangedEventArgs
      public enum DetectionConfigurationAvailabilityChangeKind
      public sealed class DetectionConfigurationAvailabilityInfo
      public enum DetectionConfigurationTrainingStatus
    }
    namespace Windows.ApplicationModel.DataTransfer {
      public sealed class DataPackage {
        event TypedEventHandler<DataPackage, object> ShareCanceled;
      }
    }
    namespace Windows.Devices.Bluetooth {
      public sealed class BluetoothAdapter {
        bool IsExtendedAdvertisingSupported { get; }
        uint MaxAdvertisementDataLength { get; }
      }
    }
    namespace Windows.Devices.Bluetooth.Advertisement {
      public sealed class BluetoothLEAdvertisementPublisher {
        bool IncludeTransmitPowerLevel { get; set; }
        bool IsAnonymous { get; set; }
        IReference<short> PreferredTransmitPowerLevelInDBm { get; set; }
        bool UseExtendedAdvertisement { get; set; }
      }
      public sealed class BluetoothLEAdvertisementPublisherStatusChangedEventArgs {
        IReference<short> SelectedTransmitPowerLevelInDBm { get; }
      }
      public sealed class BluetoothLEAdvertisementReceivedEventArgs {
        BluetoothAddressType BluetoothAddressType { get; }
        bool IsAnonymous { get; }
        bool IsConnectable { get; }
        bool IsDirected { get; }
        bool IsScannable { get; }
        bool IsScanResponse { get; }
        IReference<short> TransmitPowerLevelInDBm { get; }
      }
      public enum BluetoothLEAdvertisementType {
        Extended = 5,
      }
      public sealed class BluetoothLEAdvertisementWatcher {
        bool AllowExtendedAdvertisements { get; set; }
      }
      public enum BluetoothLEScanningMode {
        None = 2,
      }
    }
    namespace Windows.Devices.Bluetooth.Background {
      public sealed class BluetoothLEAdvertisementPublisherTriggerDetails {
        IReference<short> SelectedTransmitPowerLevelInDBm { get; }
      }
    }
    namespace Windows.Devices.Display {
      public sealed class DisplayMonitor {
        bool IsDolbyVisionSupportedInHdrMode { get; }
      }
    }
    namespace Windows.Devices.Input {
      public sealed class PenButtonListener
      public sealed class PenDockedEventArgs
      public sealed class PenDockListener
      public sealed class PenTailButtonClickedEventArgs
      public sealed class PenTailButtonDoubleClickedEventArgs
      public sealed class PenTailButtonLongPressedEventArgs
      public sealed class PenUndockedEventArgs
    }
    namespace Windows.Devices.Sensors {
      public sealed class Accelerometer {
        AccelerometerDataThreshold ReportThreshold { get; }
      }
      public sealed class AccelerometerDataThreshold
      public sealed class Barometer {
        BarometerDataThreshold ReportThreshold { get; }
      }
      public sealed class BarometerDataThreshold
      public sealed class Compass {
        CompassDataThreshold ReportThreshold { get; }
      }
      public sealed class CompassDataThreshold
      public sealed class Gyrometer {
        GyrometerDataThreshold ReportThreshold { get; }
      }
      public sealed class GyrometerDataThreshold
      public sealed class Inclinometer {
        InclinometerDataThreshold ReportThreshold { get; }
      }
      public sealed class InclinometerDataThreshold
      public sealed class LightSensor {
        LightSensorDataThreshold ReportThreshold { get; }
      }
      public sealed class LightSensorDataThreshold
      public sealed class Magnetometer {
        MagnetometerDataThreshold ReportThreshold { get; }
      }
      public sealed class MagnetometerDataThreshold
    }
    namespace Windows.Foundation.Metadata {
      public sealed class AttributeNameAttribute : Attribute
      public sealed class FastAbiAttribute : Attribute
      public sealed class NoExceptionAttribute : Attribute
    }
    namespace Windows.Globalization {
      public sealed class Language {
        string AbbreviatedName { get; }
        public static IVector<string> GetMuiCompatibleLanguageListFromLanguageTags(IIterable<string> languageTags);
      }
    }
    namespace Windows.Graphics.Capture {
      public sealed class GraphicsCaptureSession : IClosable {
        bool IsCursorCaptureEnabled { get; set; }
      }
    }
    namespace Windows.Graphics.DirectX {
      public enum DirectXPixelFormat {
        SamplerFeedbackMinMipOpaque = 189,
        SamplerFeedbackMipRegionUsedOpaque = 190,
      }
    }
    namespace Windows.Graphics.Holographic {
      public sealed class HolographicFrame {
        HolographicFrameId Id { get; }
      }
      public struct HolographicFrameId
      public sealed class HolographicFrameRenderingReport
      public sealed class HolographicFrameScanoutMonitor : IClosable
      public sealed class HolographicFrameScanoutReport
      public sealed class HolographicSpace {
        HolographicFrameScanoutMonitor CreateFrameScanoutMonitor(uint maxQueuedReports);
      }
    }
    namespace Windows.Management.Deployment {
      public sealed class AddPackageOptions
      public enum DeploymentOptions : uint {
        StageInPlace = (uint)4194304,
      }
      public sealed class PackageManager {
        IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> AddPackageByUriAsync(Uri packageUri, AddPackageOptions options);
        IVector<Package> FindProvisionedPackages();
        PackageStubPreference GetPackageStubPreference(string packageFamilyName);
        IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> RegisterPackageByUriAsync(Uri manifestUri, RegisterPackageOptions options);
        IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> RegisterPackagesByFullNameAsync(IIterable<string> packageFullNames, RegisterPackageOptions options);
        void SetPackageStubPreference(string packageFamilyName, PackageStubPreference useStub);
        IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> StagePackageByUriAsync(Uri packageUri, StagePackageOptions options);
      }
      public enum PackageStubPreference
      public enum PackageTypes : uint {
        All = (uint)4294967295,
      }
      public sealed class RegisterPackageOptions
      public enum RemovalOptions : uint {
        PreserveRoamableApplicationData = (uint)128,
      }
      public sealed class StagePackageOptions
      public enum StubPackageOption
    }
    namespace Windows.Media.Audio {
      public sealed class AudioPlaybackConnection : IClosable
      public sealed class AudioPlaybackConnectionOpenResult
      public enum AudioPlaybackConnectionOpenResultStatus
      public enum AudioPlaybackConnectionState
    }
    namespace Windows.Media.Capture {
      public sealed class MediaCapture : IClosable {
        MediaCaptureRelativePanelWatcher CreateRelativePanelWatcher(StreamingCaptureMode captureMode, DisplayRegion displayRegion);
      }
      public sealed class MediaCaptureInitializationSettings {
        Uri DeviceUri { get; set; }
        PasswordCredential DeviceUriPasswordCredential { get; set; }
      }
     public sealed class MediaCaptureRelativePanelWatcher : IClosable
    }
    namespace Windows.Media.Capture.Frames {
      public sealed class MediaFrameSourceInfo {
        Panel GetRelativePanel(DisplayRegion displayRegion);
      }
    }
    namespace Windows.Media.Devices {
      public sealed class PanelBasedOptimizationControl
      public sealed class VideoDeviceController : IMediaDeviceController {
        PanelBasedOptimizationControl PanelBasedOptimizationControl { get; }
      }
    }
    namespace Windows.Media.MediaProperties {
      public static class MediaEncodingSubtypes {
        public static string Pgs { get; }
        public static string Srt { get; }
        public static string Ssa { get; }
        public static string VobSub { get; }
      }
      public sealed class TimedMetadataEncodingProperties : IMediaEncodingProperties {
        public static TimedMetadataEncodingProperties CreatePgs();
        public static TimedMetadataEncodingProperties CreateSrt();
        public static TimedMetadataEncodingProperties CreateSsa(byte[] formatUserData);
        public static TimedMetadataEncodingProperties CreateVobSub(byte[] formatUserData);
      }
    }
    namespace Windows.Networking.BackgroundTransfer {
      public sealed class DownloadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
        void RemoveRequestHeader(string headerName);
        void SetRequestHeader(string headerName, string headerValue);
      }
      public sealed class UploadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
        void RemoveRequestHeader(string headerName);
        void SetRequestHeader(string headerName, string headerValue);
      }
    }
    namespace Windows.Networking.Connectivity {
      public enum NetworkAuthenticationType {
        Owe = 12,
      }
    }
    namespace Windows.Networking.NetworkOperators {
      public sealed class NetworkOperatorTetheringAccessPointConfiguration {
        TetheringWiFiBand Band { get; set; }
        bool IsBandSupported(TetheringWiFiBand band);
        IAsyncOperation<bool> IsBandSupportedAsync(TetheringWiFiBand band);
      }
      public sealed class NetworkOperatorTetheringManager {
        public static void DisableNoConnectionsTimeout();
        public static IAsyncAction DisableNoConnectionsTimeoutAsync();
        public static void EnableNoConnectionsTimeout();
        public static IAsyncAction EnableNoConnectionsTimeoutAsync();
        public static bool IsNoConnectionsTimeoutEnabled();
      }
      public enum TetheringWiFiBand
    }
    namespace Windows.Networking.PushNotifications {
      public static class PushNotificationChannelManager {
        public static event EventHandler<PushNotificationChannelsRevokedEventArgs> ChannelsRevoked;
      }
      public sealed class PushNotificationChannelsRevokedEventArgs
      public sealed class RawNotification {
        IBuffer ContentBytes { get; }
      }
    }
    namespace Windows.Security.Authentication.Web.Core {
      public sealed class WebAccountMonitor {
        event TypedEventHandler<WebAccountMonitor, WebAccountEventArgs> AccountPictureUpdated;
      }
    }
    namespace Windows.Security.Isolation {
      public sealed class IsolatedWindowsEnvironment
      public enum IsolatedWindowsEnvironmentActivator
      public enum IsolatedWindowsEnvironmentAllowedClipboardFormats : uint
      public enum IsolatedWindowsEnvironmentAvailablePrinters : uint
      public enum IsolatedWindowsEnvironmentClipboardCopyPasteDirections : uint
      public struct IsolatedWindowsEnvironmentContract
      public struct IsolatedWindowsEnvironmentCreateProgress
      public sealed class IsolatedWindowsEnvironmentCreateResult
      public enum IsolatedWindowsEnvironmentCreateStatus
      public sealed class IsolatedWindowsEnvironmentFile
      public static class IsolatedWindowsEnvironmentHost
      public enum IsolatedWindowsEnvironmentHostError
      public sealed class IsolatedWindowsEnvironmentLaunchFileResult
      public enum IsolatedWindowsEnvironmentLaunchFileStatus
      public sealed class IsolatedWindowsEnvironmentOptions
      public static class IsolatedWindowsEnvironmentOwnerRegistration
      public sealed class IsolatedWindowsEnvironmentOwnerRegistrationData
      public sealed class IsolatedWindowsEnvironmentOwnerRegistrationResult
      public enum IsolatedWindowsEnvironmentOwnerRegistrationStatus
      public sealed class IsolatedWindowsEnvironmentProcess
      public enum IsolatedWindowsEnvironmentProcessState
      public enum IsolatedWindowsEnvironmentProgressState
      public sealed class IsolatedWindowsEnvironmentShareFolderRequestOptions
      public sealed class IsolatedWindowsEnvironmentShareFolderResult
      public enum IsolatedWindowsEnvironmentShareFolderStatus
      public sealed class IsolatedWindowsEnvironmentStartProcessResult
      public enum IsolatedWindowsEnvironmentStartProcessStatus
      public sealed class IsolatedWindowsEnvironmentTelemetryParameters
      public static class IsolatedWindowsHostMessenger
      public delegate void MessageReceivedCallback(Guid receiverId, IVectorView<object> message);
    }
    namespace Windows.Storage {
      public static class KnownFolders {
        public static IAsyncOperation<StorageFolder> GetFolderAsync(KnownFolderId folderId);
        public static IAsyncOperation<KnownFoldersAccessStatus> RequestAccessAsync(KnownFolderId folderId);
        public static IAsyncOperation<KnownFoldersAccessStatus> RequestAccessForUserAsync(User user, KnownFolderId folderId);
      }
      public enum KnownFoldersAccessStatus
      public sealed class StorageFile : IInputStreamReference, IRandomAccessStreamReference, IStorageFile, IStorageFile2, IStorageFilePropertiesWithAvailability, IStorageItem, IStorageItem2, IStorageItemProperties, IStorageItemProperties2, IStorageItemPropertiesWithProvider {
        public static IAsyncOperation<StorageFile> GetFileFromPathForUserAsync(User user, string path);
      }
      public sealed class StorageFolder : IStorageFolder, IStorageFolder2, IStorageFolderQueryOperations, IStorageItem, IStorageItem2, IStorageItemProperties, IStorageItemProperties2, IStorageItemPropertiesWithProvider {
        public static IAsyncOperation<StorageFolder> GetFolderFromPathForUserAsync(User user, string path);
      }
    }
    namespace Windows.Storage.Provider {
      public sealed class StorageProviderFileTypeInfo
      public sealed class StorageProviderSyncRootInfo {
        IVector<StorageProviderFileTypeInfo> FallbackFileTypeInfo { get; }
      }
      public static class StorageProviderSyncRootManager {
        public static bool IsSupported();
      }
    }
    namespace Windows.System {
      public sealed class UserChangedEventArgs {
        IVectorView<UserWatcherUpdateKind> ChangedPropertyKinds { get; }
      }
      public enum UserWatcherUpdateKind
    }
    namespace Windows.UI.Composition.Interactions {
      public sealed class InteractionTracker : CompositionObject {
        int TryUpdatePosition(Vector3 value, InteractionTrackerClampingOption option, InteractionTrackerPositionUpdateOption posUpdateOption);
      }
      public enum InteractionTrackerPositionUpdateOption
    }
    namespace Windows.UI.Input {
      public sealed class CrossSlidingEventArgs {
        uint ContactCount { get; }
      }
      public sealed class DraggingEventArgs {
        uint ContactCount { get; }
      }
      public sealed class GestureRecognizer {
        uint HoldMaxContactCount { get; set; }
        uint HoldMinContactCount { get; set; }
        float HoldRadius { get; set; }
        TimeSpan HoldStartDelay { get; set; }
        uint TapMaxContactCount { get; set; }
        uint TapMinContactCount { get; set; }
        uint TranslationMaxContactCount { get; set; }
        uint TranslationMinContactCount { get; set; }
      }
      public sealed class HoldingEventArgs {
        uint ContactCount { get; }
        uint CurrentContactCount { get; }
      }
      public sealed class ManipulationCompletedEventArgs {
        uint ContactCount { get; }
        uint CurrentContactCount { get; }
      }
      public sealed class ManipulationInertiaStartingEventArgs {
        uint ContactCount { get; }
      }
      public sealed class ManipulationStartedEventArgs {
        uint ContactCount { get; }
      }
      public sealed class ManipulationUpdatedEventArgs {
        uint ContactCount { get; }
        uint CurrentContactCount { get; }
      }
      public sealed class RightTappedEventArgs {
        uint ContactCount { get; }
      }
      public sealed class SystemButtonEventController : AttachableInputObject
      public sealed class SystemFunctionButtonEventArgs
      public sealed class SystemFunctionLockChangedEventArgs
      public sealed class SystemFunctionLockIndicatorChangedEventArgs
      public sealed class TappedEventArgs {
        uint ContactCount { get; }
      }
    }
    namespace Windows.UI.Input.Inking {
      public sealed class InkModelerAttributes {
        bool UseVelocityBasedPressure { get; set; }
      }
    }
    namespace Windows.UI.Text {
      public enum RichEditMathMode
      public sealed class RichEditTextDocument : ITextDocument {
        void GetMath(out string value);
        void SetMath(string value);
        void SetMathMode(RichEditMathMode mode);
      }
    }
    namespace Windows.UI.ViewManagement {
      public sealed class UISettings {
        event TypedEventHandler<UISettings, UISettingsAnimationsEnabledChangedEventArgs> AnimationsEnabledChanged;
        event TypedEventHandler<UISettings, UISettingsMessageDurationChangedEventArgs> MessageDurationChanged;
      }
      public sealed class UISettingsAnimationsEnabledChangedEventArgs
      public sealed class UISettingsMessageDurationChangedEventArgs
    }
    namespace Windows.UI.ViewManagement.Core {
      public sealed class CoreInputView {
        event TypedEventHandler<CoreInputView, CoreInputViewHidingEventArgs> PrimaryViewHiding;
        event TypedEventHandler<CoreInputView, CoreInputViewShowingEventArgs> PrimaryViewShowing;
      }
      public sealed class CoreInputViewHidingEventArgs
      public enum CoreInputViewKind {
        Symbols = 4,
      }
      public sealed class CoreInputViewShowingEventArgs
      public sealed class UISettingsController
    }
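
    To show how one of the additions above might be consumed, here is a minimal C++/WinRT sketch based solely on the Windows.Storage.KnownFolders signatures listed above. These are preview APIs that may change, the C++/WinRT projection shown is an assumption, and the KnownFoldersAccessStatus::Allowed value is a guess, since the listing does not enumerate the enum’s members.

    // Minimal C++/WinRT sketch (preview APIs; names follow the listing above).
    #include <winrt/Windows.Foundation.h>
    #include <winrt/Windows.Storage.h>

    using namespace winrt;
    using namespace Windows::Storage;

    Windows::Foundation::IAsyncAction OpenDocumentsAsync()
    {
        // New in this preview: request access to a known folder by KnownFolderId.
        KnownFoldersAccessStatus status =
            co_await KnownFolders::RequestAccessAsync(KnownFolderId::DocumentsLibrary);

        if (status == KnownFoldersAccessStatus::Allowed)  // assumed enum value
        {
            // Also new in this preview: retrieve the folder by KnownFolderId.
            StorageFolder folder =
                co_await KnownFolders::GetFolderAsync(KnownFolderId::DocumentsLibrary);
            // ... enumerate or open files in the folder here ...
        }
    }

    int main()
    {
        init_apartment();
        OpenDocumentsAsync().get();
    }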
    
    

    The post Windows 10 SDK Preview Build 19013 available now! appeared first on Windows Developer Blog.


