
CMake Tools Extension for Visual Studio Code


Microsoft is now the primary maintainer of the CMake Tools extension for Visual Studio Code. The extension was created and previously maintained by vector-of-bool, who has moved on to other things. Thank you vector-of-bool for all of your hard work getting this extension to where it is today!

About the extension

The CMake Tools extension provides developers with a convenient and powerful workflow for configuring, building, browsing, and debugging CMake-based projects in Visual Studio Code. You can visit the CMake Tools documentation and the extension’s GitHub repository to get started and learn more.

The following screenshot of the extension shows a logical view of the open-source CMake project bullet3, organized by target (left), alongside several CMake-specific commands in the command palette.

An image of the CMake Tools extension for VS Code, with a project outline to the left and several CMake-specific commands in the command palette.

We recommend using the CMake Tools extension alongside the C/C++ extension for Visual Studio Code for IntelliSense configuration and a full-fidelity C/C++ development experience.

Feedback is welcome

Download the CMake Tools extension for Visual Studio Code today and give it a try. If you run into issues or have suggestions for the team, please report them in the issues section of the extension’s GitHub repository. You can also reach the team via email (visualcpp@microsoft.com) and Twitter (@VisualC).

The post CMake Tools Extension for Visual Studio Code appeared first on C++ Team Blog.


Bing 2020 US Elections Experience (Beta)


Our goal with Bing is to provide quick and easy ways to find the information you need to make informed decisions from across sources and content. This election season we want to provide a single destination for the 2020 U.S. presidential race that delivers comprehensive information about candidates and issues. Today, we’re launching an expanded Bing elections experience in Beta. 

The 2020 U.S. presidential election is right around the corner, and it can be difficult to find information on candidates and issues in one place. You might have to search across various news sources, candidate websites, and government sites, or look through a voter’s pamphlet, piecing the information together yourself.

Throughout the 2020 election cycle, Bing aims to help people understand the issues at the heart of political discussions. Our goal is to provide a comprehensive view with the most relevant, accurate and timely information. This includes a holistic overview and introduction to key issues, with a range of news sources and opinions, and key legislation that impacts the issues. 

The candidate experience leverages data sources from news articles, official candidate sites, and non-partisan partners such as VoteSmart.org. When you search for a particular presidential candidate, we bring all of that information together into a simple experience where you can find the latest news and upcoming events, and explore each candidate’s stance on the issues, in their own words, from various sources. The experience also provides a summary of each candidate’s voting record on congressional bills.



Today, the Beta experience begins with a focus on the U.S. presidential candidates and related issues. Over time, as we learn and hear feedback from customers, we will look to expand this experience to …

This work is part of a broader effort across Bing and Microsoft News to provide people with the most relevant and timely information to help them make decisions.

We’ll continue to refine and improve this experience over time and we welcome your feedback as we continue on this journey.

-The Bing Search Team

Top Stories from the Microsoft DevOps Community – 2019.11.22


After all the recent travel, I finally got to spend this week at home and recharge. It was a much-needed break, and I got to enjoy Chicago, even though winter decided to arrive early this year. So let's make a fresh cup of tea and enjoy some community posts on code security and mobile development!

How to reuse your Azure DevOps pipeline between your projects
Code reuse has been a best practice for decades. But when we got into deployment automation, we seemed to forget how many issues can be caused by duplicating and maintaining the same implementation in multiple places. Can we reuse Azure Pipelines, and make sure that all of our future changes are applied across the board? Yes, and it gets easier with YAML! This article from Damien Aicheh shows us how to break down and reuse our Azure YAML pipeline across multiple projects, using an Android app as an example. Thank you, Damien!

Azure DevOps Settings for Xamarin iOS 13 and Android 10 Apps
Speaking of Android apps, Visual Studio 2019 recently got updated to support the recent versions of mobile development environments. Unfortunately, the update may have broken the hosted builds for some folks. This post from James Montemagno shows the updates needed in Azure DevOps to make sure your Xamarin builds are running successfully. Thank you, James!

99% of code isn’t yours
As mentioned earlier, code reuse helps us be more productive and less error-prone. So it is mostly great news that, according to some reports, we share the vast majority of our code today. It does mean, however, that we need to be extra careful about the packages we consume. In recent years, there has been a ramp-up in supply chain attacks, in which someone infiltrates your system through a third-party dependency by injecting malicious code into that dependency. This post from Jesse Houwing covers one of the potential ways to prevent such an attack in .NET projects. Thank you, Jesse!

Prevent “shadow-IT” Azure DevOps organizations
When you create a new Azure DevOps organization using your work email, it gets automatically tied to your Azure Active Directory (AAD). The benefit of this is that you can easily add your coworkers to the organization. The downside, however, is that large enterprises might not be aware of all the organizations created under their AAD. Read this post from Jasper Gilhuis to learn how to set a policy that restricts who can create new organizations under the company AAD.

Microsoft Security Code Analysis for Azure DevOps – Part 3 BinSkim
As security is top of mind for everyone, we recently released a new set of security tools for Azure DevOps called Microsoft Security Code Analysis. In this post, Gregor Suttie covers BinSkim, an open-source tool that validates compiler and linker settings. Check out Gregor’s other posts in the series to learn what else is in the toolkit!

If you’ve written an article about Azure DevOps or find some great content about DevOps on Azure, please share it with the #AzureDevOps hashtag on Twitter!

The post Top Stories from the Microsoft DevOps Community – 2019.11.22 appeared first on Azure DevOps Blog.

Programmatically change your system’s mic and speakers with NirCmd and Elgato StreamDeck


I've got a lot of different sound devices, like USB headphones, a formal conference room speakerphone for conference calls, and 5.1 surround sound speakers, as well as different mics, like a nice Shure XLR connected to a PV6 USB Audio Mixer and the built-in mics in my webcams and other devices.

There are lots of great audio apps and applets that can improve the audio-switching situation on Windows. I like Audio Switcher and the similarly named https://audioswit.ch/er, for example.

You can also change your audio inputs automatically depending on the app. So if you always want to record your podcast with Audacity, you can tell Windows 10 to always set (or rather, lie about) the audio ins and outs on an app-by-app basis. The app will never know the difference.

But I need to change audio a lot as I move between Teams calls, podcast recording, and watching shows. I've got this Elgato Stream Deck that has buttons I can assign to anything. Combine the Stream Deck with the lovely NirCmd utility from NirSoft and I've got one-click audio changes!

The icons are just PNGs and there are lots available online. I created a bunch of batch files (*.bat) with contents like this:

nircmdc setdefaultsounddevice "Speakers" 0

and

nircmdc setdefaultsounddevice "Headphones" 0  

The last number is 0, 1, or 2, which maps to Console, Multimedia, or Communications, respectively. You can have one sound device for apps like Netflix and another for apps like Skype that identify as Communications. I just change all the defaults, myself.

You can also add commands like "setsubunitvolumedb" and others to set preset volumes and levels for your line-ins. It's ideal for getting reliable results.

Elgato Stream Deck

Then just use the Stream Deck utility to assign the icon and batch file using the "System | Open" widget. Drag it over, assign it, and you're set! If you can't figure out what the names of your sound devices are, you can run nircmd showsounddevices.

It just took a few minutes to set this up and it'll save me a bunch of clicks every day.


Sponsor: Like C#? We do too! That’s why we've developed a fast, smart, cross-platform .NET IDE which gives you even more coding power. Clever code analysis, rich code completion, instant search and navigation, an advanced debugger... With JetBrains Rider, everything you need is at your fingertips. Code C# at the speed of thought on Linux, Mac, or Windows. Try JetBrains Rider today!




Easily move WSL distributions between Windows 10 machines with import and export!


My colleague Tara and I were working on prepping a system for Azure IoT development and were using WSL2 on our respective machines. The scripts we were running were long-running and tedious, and by the time they were done we basically had a totally customized, perfect distro.

Rather than sharing our scripts and having folks run them for hours, we instead decided to export the distro and import it on any number of machines. That way Tara could set up the distro perfectly and then give it to me.

For example, when using PowerShell I can do this:

C:\Users\Scott\Desktop> wsl --export PerfectWSLDistro ./PerfectWSLDistro.tar

Then I can share the resulting tar and give it to a friend, and they can do this! (Note that I'm using ~, which is your home directory, from PowerShell. If you're using cmd.exe you'll want to include the full path, like c:\users\scott\AppData\Local\PerfectDistro.)

mkdir ~/AppData/Local/PerfectDistro

wsl --import PerfectDistro ~/AppData/Local/PerfectDistro ./PerfectWSLDistro.tar --version 2

You can list out your WSL distros like this:

C:\Users\Scott\Desktop> wsl --list -v

NAME STATE VERSION
* Ubuntu-18.04 Stopped 2
WLinux Stopped 2
Debian Stopped 1
PerfectDistro Stopped 2

It's surprisingly easy! Also, make sure you have the latest version of the Windows Terminal (and if you've got an old version and haven't deleted your profile.json, it's time to start fresh); it will automatically detect your WSL distros and make menu items for them!

Also be sure to check out my YouTube video on developing with WSL2!






Developing for the new category of dual-screen devices built for mobile productivity


Last month we shared our vision for dual-screen devices, designed to help people get more done on smaller and more mobile form factors. Today, we are going to share how developers can unlock this new era of mobile creativity. There are two stages to optimize for dual-screen devices:

1. Your websites and apps work

Surface Neo and Surface Duo side by side.

2. Embrace dual-screen experiences

Surface Duo and Surface Neo.

1) Your websites and apps work

Your code is important, and you will not have to start anew on these devices. Our goal is to make it as easy as possible for your existing websites and apps to work well on dual-screen devices.

Windows 10X is an expression of Windows 10 and will be available on dual-screen and foldable PCs, including the Surface Neo and devices from several partners. Developers will be able to use existing investments and tools for Web, UWP, and Win32 on these devices.

The Surface Duo will bring together Android apps, OS, and Surface hardware. Your current websites and Android apps will continue to work and run on a single screen. You can also stay in your current workflow and continue to use the same tools you do now.

Graphic showing devices and supported apps.

2) Embrace dual-screen experiences: introducing a common model

The excitement for this new device category creates a great opportunity for developers to innovate and reach new customers, enabling them to be more productive and engaged while on the go. We are in the process of identifying key postures and layouts across dual-screen and foldable PCs so that you can take advantage of both form factors.

For native app developers, our goal is to develop a common model layered onto existing platform-specific tools and frameworks for Windows and Android. Of course, APIs to access this model will be tailored to the developer platform for each operating system. For example, you can use APIs to enhance your apps to use dual-screen capabilities and features like the 360-degree hinge.

Web will continue to follow the standards-based model. And we are committed to building the right web standards and APIs to allow web developers to take advantage of cross-platform dual-screen capabilities. Web developers can use the browser or web-based app model of their choosing to take advantage of these capabilities.

Early access

We are excited to start working with developers, and for those who want to adopt early please reach out to dualscreendev@microsoft.com to learn more. Thank you for your continued support and interest in this new device category. We cannot wait to share more details with developers in early 2020.

The post Developing for the new category of dual-screen devices built for mobile productivity appeared first on Windows Developer Blog.

Azure DevOps will no longer support Alternate Credentials authentication


We, the Azure DevOps team, work hard to ensure that your code is protected while enabling you to have friction free access. Until now, we’ve offered customers the ability to use Alternate Credentials in situations where they are connecting to Azure DevOps using legacy tools. While using Alternate Credentials was an easy way to set up authentication access to Azure DevOps, it is also less secure than other alternatives such as personal access tokens (PATs). As such, we believe the use of Alternate Credentials authentication represents a security risk to our customers because they never expire and can’t be scoped to limit access to the Azure DevOps data.

Security Changes

Azure DevOps will stop supporting Alternate Credentials authentication beginning March 2, 2020. The deprecation process will start by disabling and hiding this feature for organizations that are not using Alternate Credentials beginning December 9, 2019. Then starting March 2, 2020 we will gradually turn off this feature for the rest of the organizations, which means that individuals using Alternate Credentials have until then to transition to a more secure authentication method to avoid this breaking change impacting their DevOps workflows.

Will this impact you?

To check whether you have Alternate Credentials configured for each organization you belong to, go to the Azure DevOps portal. In the top right corner, open the User Settings menu, then click on the Alternate Credentials menu item.

User settings menu

If you have Alternate Credentials configured in Azure DevOps, you will see it listed. In this case, you should move to another form of authentication by March 2, 2020. We recommend PATs. If you are using Alternate Credentials with Git (this is the most common usage scenario), then follow these instructions to set up Git with PATs.
If you see ‘Secondary Inactive’ or a message stating that Alternate Credentials were disabled for your organization, it means you don’t have Alternate Credentials set in Azure DevOps. There is no action item for you.
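If you use Alternate Credentials with a legacy tool or script that calls the Azure DevOps REST API directly, the replacement pattern is the same: pass a PAT as the password in an HTTP basic authentication header. Here is a minimal Python sketch of that pattern; the organization name and token are placeholders.

import base64
import urllib.request

organization = "your-organization"   # placeholder
pat = "your-personal-access-token"   # placeholder; treat it like a password

# Azure DevOps accepts a PAT as the password of a basic auth header; the username is left empty.
token = base64.b64encode(f":{pat}".encode()).decode()

request = urllib.request.Request(
    f"https://dev.azure.com/{organization}/_apis/projects?api-version=5.1",
    headers={"Authorization": f"Basic {token}"},
)

with urllib.request.urlopen(request) as response:
    print(response.read().decode())  # JSON list of the organization's projects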

Deprecation Timeline

  • Beginning December 9, 2019 we will disable and hide Alternate Credentials settings for organizations that don’t have Alternate Credentials set. This change will be in effect for all these organizations by December 20, 2019.
  • In the coming months we will work with our customers that are still using the feature, to help them switch to another, more secure authentication method.
  • March 2, 2020 – Start gradually disabling Alternate Credentials for all Azure DevOps organizations.

Contact Us

If you have any questions, please open a developer community item with the tag [AltCreds] in the title. For faster service, please search for [AltCreds] in the developer community forum first, as your question might already be answered. You can reach out to us on Twitter at @AzureDevOps too.

FAQ

Q: As a user, what happens when Azure DevOps disables Alternate Credentials?
A: The tools that you use to connect to Azure DevOps using Alternate Credentials will stop working.

Q: As a user, how do I know in what scenario I am using Alternate Credentials in a specific organization?
A: We will email you the user agent (if we have it) and the identity that is using it, starting mid-December 2019.

Q: As a user, should I delete my Alternate Credentials for a specific organization?
A: You are not required to, but deleting them is a way to test whether anything breaks without them. You can re-enable your Alternate Credentials after completing the test. Save the username and password somewhere before deleting them, just in case.

Q: As an administrator, how do I know if there are active users of Alternate Credentials in my organization?
A: We will email you this information, along with the user agents (if we have this information) and the identities that are using Alternate Credentials, starting mid-December 2019.

Q: As an administrator, should I turn off the alternate Credentials policy?
A: If you want to get this change sooner, you can turn the policy off. Turning the policy off is reversible until December 8, 2019. After that, you won’t be able to turn the policy on from the portal; you would need to contact us to do that (contact info above).

Q: Will this change apply to Azure DevOps Server?
A: No, because we already do not support Alternate Credentials in Azure DevOps Server.

The post Azure DevOps will no longer support Alternate Credentials authentication appeared first on Azure DevOps Blog.

Multi-language identification and transcription in Video Indexer


Multi-language speech transcription was recently introduced into Microsoft Video Indexer at the International Broadcasters Conference (IBC). It is available as a preview capability and customers can already start experiencing it in our portal. More details on all our IBC2019 enhancements can be found here.

Multi-language videos are common media assets in a globalized world; global political summits, economic forums, and sports press conferences are examples of venues where speakers use their native language to convey their own statements. Those videos pose a unique challenge for companies that need to provide automatic transcription for large volumes of video archives. Automatic transcription technologies expect users to explicitly specify the video language in advance in order to convert speech to text. This manual step becomes a scalability obstacle when transcribing multi-language content, as one would have to manually tag audio segments with the appropriate language.

Microsoft Video Indexer provides a unique capability of automatic spoken language identification for multi-language content. This solution allows users to easily transcribe multi-language content without going through tedious manual preparation steps before triggering it. In doing so, it can save anyone with a large archive of videos both time and money, and enable discoverability and accessibility scenarios.

Multi-language audio transcription in Video Indexer

The multi-language transcription capability is available as part of the Video Indexer portal. Currently, it supports four languages: English, French, German, and Spanish, and it expects up to three different languages in a single input media asset. While uploading a new media asset you can select the “Auto-detect multi-language” option as shown below.

1.	A new multi-language option available in the upload page of Video Indexer portal

Our application programming interface (API) supports this capability as well by enabling users to specify 'multi' as the language in the upload API. Once the indexing process is completed, the index JavaScript object notation (JSON) will include the underlying languages. Refer to our documentation for more details.
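For orientation, an upload call that opts into multi-language identification could look like the Python sketch below. The endpoint shape and query parameter names are based on the Video Indexer upload API and should be checked against the current documentation; the location, account id, access token, and video URL are placeholders.

import requests  # third-party 'requests' package

location = "trial"                      # placeholder
account_id = "<your-account-id>"        # placeholder
access_token = "<your-access-token>"    # placeholder

response = requests.post(
    f"https://api.videoindexer.ai/{location}/Accounts/{account_id}/Videos",
    params={
        "accessToken": access_token,
        "name": "press-conference",
        "videoUrl": "https://example.com/press-conference.mp4",
        "language": "multi",  # auto-detect and transcribe multiple spoken languages
    },
)
response.raise_for_status()
print(response.json()["id"])  # the new video's id, used later to fetch the index JSON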

Additionally, each instance in the transcription section will include the language in which it was transcribed.

2.	A transcription snippet from Video Indexer timeline presenting different language segments

Customers can view the transcript and identified languages by time, jump to the specific places in the video for each language, and even see the multi-language transcription as video captions. The resulting transcription is also available as closed caption files (VTT, TTML, SRT, TXT, and CSV).


Methodology

Language identification from an audio signal is a complex task. Acoustic environment, speaker gender, and speaker age are among a variety of factors that affect this process. We represent the audio signal using a visual representation, such as a spectrogram, assuming that different languages induce unique visual patterns that can be learned using deep neural networks.

Our solution has two main stages to determine the languages used in multi-language media content. First, it employs a deep neural network to classify audio segments at very high granularity, that is, segments of just a few seconds each. While a good model will successfully identify the underlying language, it can still misidentify some segments due to similarities between languages. Therefore, we apply a second stage that examines these misses and smooths the results accordingly.
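To make the second stage concrete, here is a deliberately simplified Python illustration of the idea (not the production algorithm): each short segment carries the classifier's language label, and a sliding majority vote smooths out isolated misclassifications.

from collections import Counter

def smooth_language_labels(labels, window=5):
    # Replace each segment's predicted language with the majority label in a
    # small window around it, damping isolated misclassifications.
    half = window // 2
    smoothed = []
    for i in range(len(labels)):
        neighborhood = labels[max(0, i - half): i + half + 1]
        smoothed.append(Counter(neighborhood).most_common(1)[0][0])
    return smoothed

# Per-segment classifier output (a few seconds each); 'de' at index 3 is a likely miss.
segments = ["en", "en", "en", "de", "en", "fr", "fr", "fr"]
print(smooth_language_labels(segments))  # ['en', 'en', 'en', 'en', 'en', 'fr', 'fr', 'fr']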

3.	A new insight pane showing the detected spoken languages and their exact occurrences on the timeline

Next steps

We introduced a differentiated capability for multi-language speech transcription. With this unique capability in Video Indexer, you can get more out of the content of your videos, as it allows you to immediately start searching across videos for segments in different languages. During the coming few months, we will be improving this capability by adding support for more languages and improving the model’s accuracy.

For more information, visit Video Indexer’s portal or the Video Indexer developer portal, and try this new capability. Read more about the new multi-language option and how to use it in our documentation.

Please use our UserVoice to share feedback and help us prioritize features or email visupport@microsoft.com with any questions.


A year of bringing AI to the edge


This post is co-authored by Anny Dow, Product Marketing Manager, Azure Cognitive Services.

In an age where low-latency and data security can be the lifeblood of an organization, containers make it possible for enterprises to meet these needs when harnessing artificial intelligence (AI).

Since introducing Azure Cognitive Services in containers this time last year, businesses across industries have unlocked new productivity gains and insights. Combining the most comprehensive set of domain-specific AI services in the market with containers enables enterprises to apply AI to more scenarios with Azure than with any other major cloud provider. Organizations ranging from healthcare to financial services have transformed their processes and customer experiences as a result.

 

These are some of the highlights from the past year:

Employing anomaly detection for predictive maintenance

Airbus Defense and Space, one of the world’s largest aerospace and defense companies, has tested Azure Cognitive Services in containers for developing a proof of concept in predictive maintenance. The company runs Anomaly Detector for immediately spotting unusual behavior in voltage levels to mitigate unexpected downtime. By employing advanced anomaly detection in containers without further burdening the data scientist team, Airbus can scale this critical capability across the business globally.

“Innovation has always been a driving force at Airbus. Using Anomaly Detector, an Azure Cognitive Service, we can solve some aircraft predictive maintenance use cases more easily.”  —Peter Weckesser, Digital Transformation Officer, Airbus
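As a rough sketch of what such a check can look like when the service runs in a container, the Python snippet below posts a small voltage time series to a locally hosted Anomaly Detector endpoint. The container URL is a placeholder and the request shape follows the public Anomaly Detector REST API; this is an illustration, not Airbus's implementation.

import requests  # third-party 'requests' package

endpoint = "http://localhost:5000/anomalydetector/v1.0/timeseries/entire/detect"  # local container, placeholder

voltages = [220, 221, 219, 220, 222, 221, 220, 305, 221, 220, 219, 221]  # hourly readings
series = [
    {"timestamp": f"2019-11-01T{hour:02d}:00:00Z", "value": value}
    for hour, value in enumerate(voltages)
]

response = requests.post(endpoint, json={"series": series, "granularity": "hourly"})
response.raise_for_status()

for point, is_anomaly in zip(series, response.json()["isAnomaly"]):
    if is_anomaly:
        print("Unusual voltage at", point["timestamp"], "->", point["value"])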

Automating data extraction for highly-regulated businesses

As enterprises grow, they accumulate thousands of hours of repetitive but critically important work every week. High-value domain specialists spend too much of their time on it. Today, innovative organizations use robotic process automation (RPA) to help manage, scale, and accelerate processes, and in doing so free people to create more value.

Automation Anywhere, a leader in robotic process automation, partners with these companies eager to streamline operations by applying AI. IQ Bot, their unique RPA software, automates data extraction from documents of various types. By deploying Cognitive Services in containers, Automation Anywhere can now handle documents on-premises and at the edge for highly regulated industries:

“Azure Cognitive Services in containers gives us the headroom to scale, both on-premises and in the cloud, especially for verticals such as insurance, finance, and health care where there are millions of documents to process.” —Prince Kohli, Chief Technology Officer for Products and Engineering, Automation Anywhere

For more about Automation Anywhere's partnership with Microsoft to democratize AI for organizations, check out this blog post.

Delighting customers and employees with an intelligent virtual agent

Lowell, one of the largest credit management services in Europe, wants credit to work better for everybody. So it works hard to make every consumer interaction as painless as possible with AI. Partnering with Crayon, a global leader in cloud services and solutions, Lowell set out to solve the outdated processes that kept the company’s highly trained credit counselors too busy with routine inquiries and created friction in the customer experience. Lowell turned to Cognitive Services to create an AI-enabled virtual agent that now handles 40 percent of all inquiries, making it easier for service agents to deliver greater value to consumers and better outcomes for Lowell clients.

With GDPR requirements, chatbots weren’t an option for many businesses before containers became available. Now companies like Lowell can ensure the data handling meets stringent compliance standards while running Cognitive Services in containers. As Carl Udvang, Product Manager at Lowell explains:

"By taking advantage of container support in Cognitive Services, we built a bot that safeguards consumer information, analyzes it, and compares it to case studies about defaulted payments to find the solutions that work for each individual."

One-to-one customer care at scale in data-sensitive environments has become easier to achieve.

Empowering disaster relief organizations on the ground

A few years ago, there was a major Ebola outbreak in Liberia. A team from USAID was sent to help mitigate the crisis. Their first task on the ground was to find and categorize information such as the state of healthcare facilities, Wi-Fi networks, and population density centers. They tracked this information manually and had to extract insights from a complex corpus of data to determine the best course of action.

With the rugged versions of Azure Stack Edge, teams responding to such crises can carry a device running Cognitive Services in their backpack. They can upload unstructured data like maps, images, and pictures of documents, and then extract content, translate it, draw relationships among entities, and apply a search layer. With these cloud AI capabilities available offline, at their fingertips, response teams can find the information they need in a matter of moments. In Satya’s Ignite 2019 keynote, Dean Paron, Partner Director of Azure Storage and Edge, walks us through how Cognitive Services in Azure Stack Edge can be applied in such disaster relief scenarios (starting at 27:07).

Transforming customer support with call center analytics

Call centers are a critical customer touchpoint for many businesses, and being able to derive insights from customer calls is key to improving customer support. With Cognitive Services, businesses can transcribe calls with Speech to Text, analyze sentiment in real-time with Text Analytics, and develop a virtual agent to respond to questions with Text to Speech. However, in highly regulated industries, businesses are typically prohibited from running AI services in the cloud due to policies against uploading, processing, and storing any data in public cloud environments. This is especially true for financial institutions.

A leading bank in Europe addressed regulatory requirements and brought the latest transcription technology to their own on-premises environment by deploying Cognitive Services in containers. Through transcribing calls, customer service agents could not only get real-time feedback on customer sentiment and call effectiveness, but also batch process data to identify broad themes and unlock deeper insights on millions of hours of audio. Using containers also gave them flexibility to integrate with their own custom workflows and scale throughput at low latency.

What's next?

These stories touch on just a handful of the organizations leading innovation by bringing AI to where data lives. As running AI anywhere becomes more mainstream, the opportunities for empowering people and organizations will only be limited by the imagination.

Visit the container support page to get started with containers today.

For a deeper dive into these stories, visit the following:

Multi-protocol access on Data Lake Storage now generally available


We are excited to announce the general availability of multi-protocol access for Azure Data Lake Storage. Azure Data Lake Storage is a unique cloud storage solution for analytics that offers multi-protocol access to the same data. This is a no-compromise solution that allows both the Azure Blob Storage API and the Azure Data Lake Storage API to access data in a single storage account. You can store all your different types of data in one place, which gives you the flexibility to make the best use of your data as your use case evolves. The general availability of multi-protocol access creates the foundation to enable object storage capabilities on Data Lake Storage. This brings together the best of both object storage and the Hadoop Distributed File System (HDFS) to enable scenarios that were not possible until today without copying data.
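For example, the same file can be read through both endpoints with the Azure Storage SDKs for Python; here is a minimal sketch, where the account, container/file system, path, and credential are placeholders.

from azure.storage.blob import BlobServiceClient               # Blob Storage API (blob endpoint)
from azure.storage.filedatalake import DataLakeServiceClient   # Data Lake Storage API (dfs endpoint)

account = "<storage-account-name>"      # placeholder
credential = "<account-key-or-token>"   # placeholder

# Read a file through the Blob Storage API...
blob = (
    BlobServiceClient(f"https://{account}.blob.core.windows.net", credential=credential)
    .get_blob_client(container="datalake", blob="raw/events.json")
)
print(len(blob.download_blob().readall()), "bytes via the Blob Storage API")

# ...and read the very same data through the hierarchical Data Lake Storage API.
file = (
    DataLakeServiceClient(f"https://{account}.dfs.core.windows.net", credential=credential)
    .get_file_system_client("datalake")
    .get_file_client("raw/events.json")
)
print(len(file.download_file().readall()), "bytes via the Data Lake Storage API")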

Multi-protocol access generally available

Broader ecosystem of applications and features

Multi-protocol access provides a powerful foundation to enable integrations and features for Data Lake Storage. Existing object storage applications and connectors can now be used to access data stored in Data Lake Storage with no changes. This vastly accelerates the integration of Azure services and the partner ecosystem with Data Lake Storage. We are also announcing the general availability of multiple Azure service integrations with Data Lake Storage, including Azure Stream Analytics, IoT Hub, Azure Event Hubs Capture, Azure Data Box, and Logic Apps. These Azure services now integrate seamlessly with Data Lake Storage. Real-time scenarios are now enabled by easily ingesting streaming data into Data Lake Storage via IoT Hub, Stream Analytics, and Event Hubs Capture.

Ecosystem partners have also strongly leveraged multi-protocol access for their applications. Here is what our partners are saying:

“Multi-protocol access is a massive paradigm shift that enables cloud analytics to run on a single account for both blob data and analytics data. We believe that multi-protocol access helps customers rapidly achieve integration with Azure Data Lake Storage using our existing blob connector. This brings tremendous value to customers without needing to do costly re-development efforts.” - Rob Cornell, Head of Cloud Alliances, Talend

Our customers are excited about how their existing blob applications and workloads “just work” with the multi-protocol capability. No changes are required to their existing blob applications, saving them precious development and validation resources. We have customers today running multiple workloads seamlessly against the same data using both the blob connector and the Azure Data Lake Storage connector.

We are also making the ability to tier data between hot and cool tiers for Data Lake Storage generally available. This is great for analytics customers who want to keep frequently used analytics data in the hot tier and move less used data to cooler storage tiers for cost efficiencies. As we continue our journey, we will be enabling more capabilities on Data Lake Storage in upcoming releases. Stay tuned for more announcements in the future!
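A minimal Python sketch of moving a less frequently used object to the cool tier (the account, container, path, and credential are placeholders):

from azure.storage.blob import BlobClient, StandardBlobTier

blob = BlobClient(
    account_url="https://<storage-account-name>.blob.core.windows.net",  # placeholder
    container_name="datalake",
    blob_name="raw/2018/events.json",
    credential="<account-key-or-token>",
)
blob.set_standard_blob_tier(StandardBlobTier.Cool)  # move infrequently used data off the hot tier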

Get started with multi-protocol access

Visit our multi-protocol access documentation to get started. For additional information see our preview announcement. To learn more about pricing, see our pricing page.

Preview: Live transcription with Azure Media Services


Azure Media Services provides a platform with which you can broadcast live events. You can use our APIs to ingest, transcode, and dynamically package and encrypt your live video feeds for delivery via industry-standard protocols like HTTP Live Streaming (HLS) and MPEG-DASH. You can also use our APIs to integrate with CDNs and deliver to millions of concurrent viewers. Customers are using this platform for scenarios ranging from multi-day sporting events and entire seasons of professional sports, to webinars and town-hall meetings.

Live transcription is a new preview feature in our v3 APIs, with which you can enhance the streams delivered to your viewers with machine-generated text transcribed from the spoken words in the audio feed. This feature is an option you can enable for any type of Live Event that you create in our service, including pass-through Live Events, where you configure a live encoder upstream to generate and push a multi-bitrate live feed into the service (visualized in the diagram below).

Figure 1. Schematic diagram for live transcription
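For orientation, enabling the option when creating a Live Event looks roughly like the Python sketch below, which issues a direct call to the v3 (ARM) REST API. The resource identifiers, api-version, and the exact shape of the transcription property are assumptions here and should be verified against the preview documentation.

import requests  # third-party 'requests' package

# All identifiers below are placeholders; the 'transcriptions' list with a
# language code is an assumed request shape based on the v3 preview API.
url = (
    "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Media/mediaservices/<account>/liveEvents/<live-event-name>"
    "?api-version=2019-05-01-preview"
)

body = {
    "location": "West US 2",  # the preview region called out below
    "properties": {
        "input": {"streamingProtocol": "RTMP"},
        "transcriptions": [{"language": "en-US"}],  # turn on live transcription for this Live Event
    },
}

response = requests.put(url, json=body, headers={"Authorization": "Bearer <arm-access-token>"})
response.raise_for_status()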

When a live contribution feed is sent to the service, it extracts the audio signal, decodes it, and calls to the Azure Cognitive Services speech-to-text APIs to get the speech transcribed. The resultant text is then packaged into formats that are suitable for delivery via streaming protocols. For HTTP Live Streaming (HLS) protocol with media packaged into MPEG Transport Stream (TS) fragments, the text is packaged into WebVTT fragments. For delivery via MPEG-DASH or HLS with CMAF protocols, the text is wrapped in IMSC1.1 compatible TTML, and then packaged into MPEG-4 Part 30 (ISO/IEC 14496-30) fragments.

You can use Azure Media Player (version 2.3.3 or newer) to play the video and display the text on a wide variety of browsers and devices. You can also play back the streams in the iOS native player. If you are building an app for Android devices, playback of transcriptions has been verified with NexPlayer. You can contact them to request a demo.

Figure 2. Display of live transcription on Azure Media Player


The live transcription feature is now available in preview in the West US 2 region. Read the full article here to learn how to get started with this preview feature.

Windows 10 SDK Preview Build 19028 available now!


Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 19028 or greater). The Preview SDK Build 19028 contains bug fixes and under development changes to the API surface area.

The Preview SDK can be downloaded from the developer section on Windows Insider.

For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.

Things to note:

  • This build works in conjunction with previously released SDKs and Visual Studio 2017 and 2019. You can install this SDK and still continue to submit your apps that target Windows 10 build 1903 or earlier to the Microsoft Store.
  • The Windows SDK will now formally only be supported by Visual Studio 2017 and greater. You can download Visual Studio 2019 here.
  • This build of the Windows SDK will install only on Windows 10 Insider Preview builds.
  • In order to assist with script access to the SDK, the ISO will also be able to be accessed through the following static URL: https://software-download.microsoft.com/download/sg/Windows_InsiderPreview_SDK_en-us_19028_1.iso.

Tools Updates

Message Compiler (mc.exe)

  • Now detects the Unicode byte order mark (BOM) in .mc files. If the .mc file starts with a UTF-8 BOM, it will be read as a UTF-8 file. Otherwise, if it starts with a UTF-16LE BOM, it will be read as a UTF-16LE file. Otherwise, if the -u parameter was specified, it will be read as a UTF-16LE file. Otherwise, it will be read using the current code page (CP_ACP).
  • Now avoids one-definition-rule (ODR) problems in MC-generated C/C++ ETW helpers caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of MCGEN_EVENTWRITETRANSFER are linked into the same binary, the MC-generated ETW helpers will now respect the definition of MCGEN_EVENTWRITETRANSFER in each .cpp file instead of arbitrarily picking one or the other).

Windows Trace Preprocessor (tracewpp.exe)

  • Now supports Unicode input (.ini, .tpl, and source code) files. Input files starting with a UTF-8 or UTF-16 byte order mark (BOM) will be read as Unicode. Input files that do not start with a BOM will be read using the current code page (CP_ACP). For backwards-compatibility, if the -UnicodeIgnore command-line parameter is specified, files starting with a UTF-16 BOM will be treated as empty.
  • Now supports Unicode output (.tmh) files. By default, output files will be encoded using the current code page (CP_ACP). Use command-line parameters -cp:UTF-8 or -cp:UTF-16 to generate Unicode output files.
  • Behavior change: tracewpp now converts all input text to Unicode, performs processing in Unicode, and converts output text to the specified output encoding. Earlier versions of tracewpp avoided Unicode conversions and performed text processing assuming a single-byte character set. This may lead to behavior changes in cases where the input files do not conform to the current code page. In cases where this is a problem, consider converting the input files to UTF-8 (with BOM) and/or using the -cp:UTF-8 command-line parameter to avoid encoding ambiguity.

TraceLoggingProvider.h

  • Now avoids one-definition-rule (ODR) problems caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of TLG_EVENT_WRITE_TRANSFER are linked into the same binary, the TraceLoggingProvider.h helpers will now respect the definition of TLG_EVENT_WRITE_TRANSFER in each .cpp file instead of arbitrarily picking one or the other).
  • In C++ code, the TraceLoggingWrite macro has been updated to enable better code sharing between similar events using variadic templates.

Signing your apps with Device Guard Signing

Windows SDK Flight NuGet Feed

We have stood up a NuGet feed for the flighted builds of the SDK. You can now test preliminary builds of the Windows 10 WinRT API Pack, as well as a microsoft.windows.sdk.headless.contracts NuGet package.

We use the following feed to flight our NuGet packages.

Microsoft.Windows.SDK.Contracts can be used to add the latest Windows Runtime API support to your .NET Framework 4.5+ and .NET Core 3.0+ libraries and apps.

The Windows 10 WinRT API Pack enables you to add the latest Windows Runtime APIs support to your .NET Framework 4.5+ and .NET Core 3.0+ libraries and apps.

Microsoft.Windows.SDK.Headless.Contracts provides a subset of the Windows Runtime APIs for console apps that excludes the APIs associated with a graphical user interface. This NuGet package is used in conjunction with Windows ML container development. Check out the Getting Started guide for more information.

Breaking Changes

Removal of api-ms-win-net-isolation-l1-1-0.lib

In this release api-ms-win-net-isolation-l1-1-0.lib has been removed from the Windows SDK. Apps that were linking against api-ms-win-net-isolation-l1-1-0.lib can switch to OneCoreUAP.lib as a replacement.

Removal of IRPROPS.LIB

In this release irprops.lib has been removed from the Windows SDK. Apps that were linking against irprops.lib can switch to bthprops.lib as a drop-in replacement.

Removal of WUAPICommon.H and WUAPICommon.IDL

In this release we have moved ENUM tagServerSelection from WUAPICommon.H to wuapi.h and removed the WUAPICommon.H header. If you would like to use ENUM tagServerSelection, you will need to include wuapi.h or wuapi.idl.

API Updates, Additions and Removals

The following APIs have been added to the platform since the release of Windows 10 SDK, version 1903, build 18362.

Additions:

 

namespace Windows.AI.MachineLearning {
  public sealed class LearningModelSessionOptions {
    bool CloseModelOnSessionCreation { get; set; }
  }
}
namespace Windows.ApplicationModel {
  public sealed class AppInfo {
    public static AppInfo Current { get; }
    Package Package { get; }
    public static AppInfo GetFromAppUserModelId(string appUserModelId);
    public static AppInfo GetFromAppUserModelIdForUser(User user, string appUserModelId);
  }
  public interface IAppInfoStatics
  public sealed class Package {
    StorageFolder EffectiveExternalLocation { get; }
    string EffectiveExternalPath { get; }
    string EffectivePath { get; }
    string InstalledPath { get; }
    bool IsStub { get; }
    StorageFolder MachineExternalLocation { get; }
    string MachineExternalPath { get; }
    string MutablePath { get; }
    StorageFolder UserExternalLocation { get; }
    string UserExternalPath { get; }
    IVectorView<AppListEntry> GetAppListEntries();
    RandomAccessStreamReference GetLogoAsRandomAccessStreamReference(Size size);
  }
}
namespace Windows.ApplicationModel.AppService {
  public enum AppServiceConnectionStatus {
    AuthenticationError = 8,
    DisabledByPolicy = 10,
    NetworkNotAvailable = 9,
    WebServiceUnavailable = 11,
  }
  public enum AppServiceResponseStatus {
    AppUnavailable = 6,
    AuthenticationError = 7,
    DisabledByPolicy = 9,
    NetworkNotAvailable = 8,
    WebServiceUnavailable = 10,
  }
  public enum StatelessAppServiceResponseStatus {
    AuthenticationError = 11,
    DisabledByPolicy = 13,
    NetworkNotAvailable = 12,
    WebServiceUnavailable = 14,
  }
}
namespace Windows.ApplicationModel.Background {
  public sealed class BackgroundTaskBuilder {
    void SetTaskEntryPointClsid(Guid TaskEntryPoint);
  }
  public sealed class BluetoothLEAdvertisementPublisherTrigger : IBackgroundTrigger {
    bool IncludeTransmitPowerLevel { get; set; }
    bool IsAnonymous { get; set; }
    IReference<short> PreferredTransmitPowerLevelInDBm { get; set; }
    bool UseExtendedFormat { get; set; }
  }
  public sealed class BluetoothLEAdvertisementWatcherTrigger : IBackgroundTrigger {
    bool AllowExtendedAdvertisements { get; set; }
  }
}
namespace Windows.ApplicationModel.ConversationalAgent {
  public sealed class ActivationSignalDetectionConfiguration
  public enum ActivationSignalDetectionTrainingDataFormat
  public sealed class ActivationSignalDetector
  public enum ActivationSignalDetectorKind
  public enum ActivationSignalDetectorPowerState
  public sealed class ConversationalAgentDetectorManager
  public sealed class DetectionConfigurationAvailabilityChangedEventArgs
  public enum DetectionConfigurationAvailabilityChangeKind
  public sealed class DetectionConfigurationAvailabilityInfo
  public enum DetectionConfigurationTrainingStatus
}
namespace Windows.ApplicationModel.DataTransfer {
  public sealed class DataPackage {
    event TypedEventHandler<DataPackage, object> ShareCanceled;
  }
}
namespace Windows.Devices.Bluetooth {
  public sealed class BluetoothAdapter {
    bool IsExtendedAdvertisingSupported { get; }
    uint MaxAdvertisementDataLength { get; }
  }
}
namespace Windows.Devices.Bluetooth.Advertisement {
  public sealed class BluetoothLEAdvertisementPublisher {
    bool IncludeTransmitPowerLevel { get; set; }
    bool IsAnonymous { get; set; }
    IReference<short> PreferredTransmitPowerLevelInDBm { get; set; }
    bool UseExtendedAdvertisement { get; set; }
  }
  public sealed class BluetoothLEAdvertisementPublisherStatusChangedEventArgs {
    IReference<short> SelectedTransmitPowerLevelInDBm { get; }
  }
  public sealed class BluetoothLEAdvertisementReceivedEventArgs {
    BluetoothAddressType BluetoothAddressType { get; }
    bool IsAnonymous { get; }
    bool IsConnectable { get; }
    bool IsDirected { get; }
    bool IsScannable { get; }
    bool IsScanResponse { get; }
    IReference<short> TransmitPowerLevelInDBm { get; }
  }
  public enum BluetoothLEAdvertisementType {
    Extended = 5,
  }
  public sealed class BluetoothLEAdvertisementWatcher {
    bool AllowExtendedAdvertisements { get; set; }
  }
  public enum BluetoothLEScanningMode {
    None = 2,
  }
}
namespace Windows.Devices.Bluetooth.Background {
  public sealed class BluetoothLEAdvertisementPublisherTriggerDetails {
    IReference<short> SelectedTransmitPowerLevelInDBm { get; }
  }
}
namespace Windows.Devices.Display {
  public sealed class DisplayMonitor {
    bool IsDolbyVisionSupportedInHdrMode { get; }
  }
}
namespace Windows.Devices.Input {
  public sealed class PenButtonListener
  public sealed class PenDockedEventArgs
  public sealed class PenDockListener
  public sealed class PenTailButtonClickedEventArgs
  public sealed class PenTailButtonDoubleClickedEventArgs
  public sealed class PenTailButtonLongPressedEventArgs
  public sealed class PenUndockedEventArgs
}
namespace Windows.Devices.Sensors {
 public sealed class Accelerometer {
    AccelerometerDataThreshold ReportThreshold { get; }
  }
  public sealed class AccelerometerDataThreshold
  public sealed class Barometer {
    BarometerDataThreshold ReportThreshold { get; }
  }
  public sealed class BarometerDataThreshold
  public sealed class Compass {
    CompassDataThreshold ReportThreshold { get; }
  }
  public sealed class CompassDataThreshold
  public sealed class Gyrometer {
    GyrometerDataThreshold ReportThreshold { get; }
  }
  public sealed class GyrometerDataThreshold
  public sealed class Inclinometer {
    InclinometerDataThreshold ReportThreshold { get; }
  }
  public sealed class InclinometerDataThreshold
  public sealed class LightSensor {
    LightSensorDataThreshold ReportThreshold { get; }
  }
  public sealed class LightSensorDataThreshold
  public sealed class Magnetometer {
    MagnetometerDataThreshold ReportThreshold { get; }
  }
  public sealed class MagnetometerDataThreshold
}
namespace Windows.Foundation.Metadata {
  public sealed class AttributeNameAttribute : Attribute
  public sealed class FastAbiAttribute : Attribute
  public sealed class NoExceptionAttribute : Attribute
}
namespace Windows.Globalization {
  public sealed class Language {
    string AbbreviatedName { get; }
    public static IVector<string> GetMuiCompatibleLanguageListFromLanguageTags(IIterable<string> languageTags);
  }
}
namespace Windows.Graphics.Capture {
  public sealed class GraphicsCaptureSession : IClosable {
    bool IsCursorCaptureEnabled { get; set; }
  }
}
namespace Windows.Graphics.DirectX {
  public enum DirectXPixelFormat {
    SamplerFeedbackMinMipOpaque = 189,
    SamplerFeedbackMipRegionUsedOpaque = 190,
  }
}
namespace Windows.Graphics.Holographic {
  public sealed class HolographicFrame {
    HolographicFrameId Id { get; }
  }
  public struct HolographicFrameId
  public sealed class HolographicFrameRenderingReport
  public sealed class HolographicFrameScanoutMonitor : IClosable
  public sealed class HolographicFrameScanoutReport
  public sealed class HolographicSpace {
    HolographicFrameScanoutMonitor CreateFrameScanoutMonitor(uint maxQueuedReports);
  }
}
namespace Windows.Management.Deployment {
  public sealed class AddPackageOptions
  public enum DeploymentOptions : uint {
    StageInPlace = (uint)4194304,
  }
  public sealed class PackageManager {
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> AddPackageByUriAsync(Uri packageUri, AddPackageOptions options);
    IVector<Package> FindProvisionedPackages();
    PackageStubPreference GetPackageStubPreference(string packageFamilyName);
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> RegisterPackageByUriAsync(Uri manifestUri, RegisterPackageOptions options);
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> RegisterPackagesByFullNameAsync(IIterable<string> packageFullNames, RegisterPackageOptions options);
    void SetPackageStubPreference(string packageFamilyName, PackageStubPreference useStub);
   IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> StagePackageByUriAsync(Uri packageUri, StagePackageOptions options);
  }
  public enum PackageStubPreference
  public enum PackageTypes : uint {
    All = (uint)4294967295,
  }
  public sealed class RegisterPackageOptions
  public enum RemovalOptions : uint {
    PreserveRoamableApplicationData = (uint)128,
  }
  public sealed class StagePackageOptions
  public enum StubPackageOption
}
namespace Windows.Media.Audio {
  public sealed class AudioPlaybackConnection : IClosable
  public sealed class AudioPlaybackConnectionOpenResult
  public enum AudioPlaybackConnectionOpenResultStatus
  public enum AudioPlaybackConnectionState
}
namespace Windows.Media.Capture {
  public sealed class MediaCapture : IClosable {
    MediaCaptureRelativePanelWatcher CreateRelativePanelWatcher(StreamingCaptureMode captureMode, DisplayRegion displayRegion);
  }
  public sealed class MediaCaptureInitializationSettings {
    Uri DeviceUri { get; set; }
    PasswordCredential DeviceUriPasswordCredential { get; set; }
  }
  public sealed class MediaCaptureRelativePanelWatcher : IClosable
}
namespace Windows.Media.Capture.Frames {
  public sealed class MediaFrameSourceInfo {
    Panel GetRelativePanel(DisplayRegion displayRegion);
  }
}
namespace Windows.Media.Devices {
  public sealed class PanelBasedOptimizationControl
  public sealed class VideoDeviceController : IMediaDeviceController {
    PanelBasedOptimizationControl PanelBasedOptimizationControl { get; }
 }
}
namespace Windows.Media.MediaProperties {
  public static class MediaEncodingSubtypes {
    public static string Pgs { get; }
    public static string Srt { get; }
    public static string Ssa { get; }
    public static string VobSub { get; }
  }
  public sealed class TimedMetadataEncodingProperties : IMediaEncodingProperties {
    public static TimedMetadataEncodingProperties CreatePgs();
    public static TimedMetadataEncodingProperties CreateSrt();
    public static TimedMetadataEncodingProperties CreateSsa(byte[] formatUserData);
    public static TimedMetadataEncodingProperties CreateVobSub(byte[] formatUserData);
  }
}
namespace Windows.Networking.BackgroundTransfer {
  public sealed class DownloadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
    void RemoveRequestHeader(string headerName);
    void SetRequestHeader(string headerName, string headerValue);
  }
  public sealed class UploadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
    void RemoveRequestHeader(string headerName);
    void SetRequestHeader(string headerName, string headerValue);
  }
}
namespace Windows.Networking.Connectivity {
  public enum NetworkAuthenticationType {
    Owe = 12,
  }
}
namespace Windows.Networking.NetworkOperators {
  public sealed class NetworkOperatorTetheringAccessPointConfiguration {
    TetheringWiFiBand Band { get; set; }
    bool IsBandSupported(TetheringWiFiBand band);
    IAsyncOperation<bool> IsBandSupportedAsync(TetheringWiFiBand band);
  }
  public sealed class NetworkOperatorTetheringManager {
    public static void DisableNoConnectionsTimeout();
    public static IAsyncAction DisableNoConnectionsTimeoutAsync();
    public static void EnableNoConnectionsTimeout();
    public static IAsyncAction EnableNoConnectionsTimeoutAsync();
    public static bool IsNoConnectionsTimeoutEnabled();
  }
  public enum TetheringWiFiBand
}
namespace Windows.Networking.PushNotifications {
  public static class PushNotificationChannelManager {
    public static event EventHandler<PushNotificationChannelsRevokedEventArgs> ChannelsRevoked;
  }
  public sealed class PushNotificationChannelsRevokedEventArgs
  public sealed class RawNotification {
    IBuffer ContentBytes { get; }
  }
}
namespace Windows.Security.Authentication.Web.Core {
  public sealed class WebAccountMonitor {
    event TypedEventHandler<WebAccountMonitor, WebAccountEventArgs> AccountPictureUpdated;
  }
}
namespace Windows.Security.Isolation {
  public sealed class IsolatedWindowsEnvironment
  public enum IsolatedWindowsEnvironmentActivator
  public enum IsolatedWindowsEnvironmentAllowedClipboardFormats : uint
  public enum IsolatedWindowsEnvironmentAvailablePrinters : uint
  public enum IsolatedWindowsEnvironmentClipboardCopyPasteDirections : uint
  public struct IsolatedWindowsEnvironmentContract
  public struct IsolatedWindowsEnvironmentCreateProgress
  public sealed class IsolatedWindowsEnvironmentCreateResult
  public enum IsolatedWindowsEnvironmentCreateStatus
  public sealed class IsolatedWindowsEnvironmentFile
  public static class IsolatedWindowsEnvironmentHost
  public enum IsolatedWindowsEnvironmentHostError
  public sealed class IsolatedWindowsEnvironmentLaunchFileResult
  public enum IsolatedWindowsEnvironmentLaunchFileStatus
  public sealed class IsolatedWindowsEnvironmentOptions
  public static class IsolatedWindowsEnvironmentOwnerRegistration
  public sealed class IsolatedWindowsEnvironmentOwnerRegistrationData
  public sealed class IsolatedWindowsEnvironmentOwnerRegistrationResult
  public enum IsolatedWindowsEnvironmentOwnerRegistrationStatus
  public sealed class IsolatedWindowsEnvironmentProcess
  public enum IsolatedWindowsEnvironmentProcessState
  public enum IsolatedWindowsEnvironmentProgressState
  public sealed class IsolatedWindowsEnvironmentShareFolderRequestOptions
  public sealed class IsolatedWindowsEnvironmentShareFolderResult
  public enum IsolatedWindowsEnvironmentShareFolderStatus
  public sealed class IsolatedWindowsEnvironmentStartProcessResult
  public enum IsolatedWindowsEnvironmentStartProcessStatus
  public sealed class IsolatedWindowsEnvironmentTelemetryParameters
  public static class IsolatedWindowsHostMessenger
  public delegate void MessageReceivedCallback(Guid receiverId, IVectorView<object> message);
}
namespace Windows.Storage {
  public static class KnownFolders {
    public static IAsyncOperation<StorageFolder> GetFolderAsync(KnownFolderId folderId);
    public static IAsyncOperation<KnownFoldersAccessStatus> RequestAccessAsync(KnownFolderId folderId);
    public static IAsyncOperation<KnownFoldersAccessStatus> RequestAccessForUserAsync(User user, KnownFolderId folderId);
  }
  public enum KnownFoldersAccessStatus
  public sealed class StorageFile : IInputStreamReference, IRandomAccessStreamReference, IStorageFile, IStorageFile2, IStorageFilePropertiesWithAvailability, IStorageItem, IStorageItem2, IStorageItemProperties, IStorageItemProperties2, IStorageItemPropertiesWithProvider {
    public static IAsyncOperation<StorageFile> GetFileFromPathForUserAsync(User user, string path);
  }
  public sealed class StorageFolder : IStorageFolder, IStorageFolder2, IStorageFolderQueryOperations, IStorageItem, IStorageItem2, IStorageItemProperties, IStorageItemProperties2, IStorageItemPropertiesWithProvider {
    public static IAsyncOperation<StorageFolder> GetFolderFromPathForUserAsync(User user, string path);
  }
}
namespace Windows.Storage.Provider {
  public sealed class StorageProviderFileTypeInfo
  public sealed class StorageProviderSyncRootInfo {
    IVector<StorageProviderFileTypeInfo> FallbackFileTypeInfo { get; }
  }
  public static class StorageProviderSyncRootManager {
    public static bool IsSupported();
  }
}
namespace Windows.System {
  public sealed class UserChangedEventArgs {
    IVectorView<UserWatcherUpdateKind> ChangedPropertyKinds { get; }
  }
  public enum UserWatcherUpdateKind
}
namespace Windows.UI.Composition.Interactions {
  public sealed class InteractionTracker : CompositionObject {
    int TryUpdatePosition(Vector3 value, InteractionTrackerClampingOption option, InteractionTrackerPositionUpdateOption posUpdateOption);
  }
  public enum InteractionTrackerPositionUpdateOption
}
namespace Windows.UI.Input {
  public sealed class CrossSlidingEventArgs {
    uint ContactCount { get; }
  }
  public sealed class DraggingEventArgs {
    uint ContactCount { get; }
  }
  public sealed class GestureRecognizer {
    uint HoldMaxContactCount { get; set; }
    uint HoldMinContactCount { get; set; }
    float HoldRadius { get; set; }
    TimeSpan HoldStartDelay { get; set; }
    uint TapMaxContactCount { get; set; }
    uint TapMinContactCount { get; set; }
    uint TranslationMaxContactCount { get; set; }
    uint TranslationMinContactCount { get; set; }
  }
  public sealed class HoldingEventArgs {
    uint ContactCount { get; }
    uint CurrentContactCount { get; }
  }
  public sealed class ManipulationCompletedEventArgs {
    uint ContactCount { get; }
    uint CurrentContactCount { get; }
  }
  public sealed class ManipulationInertiaStartingEventArgs {
    uint ContactCount { get; }
  }
  public sealed class ManipulationStartedEventArgs {
    uint ContactCount { get; }
  }
  public sealed class ManipulationUpdatedEventArgs {
    uint ContactCount { get; }
    uint CurrentContactCount { get; }
  }
  public sealed class RightTappedEventArgs {
    uint ContactCount { get; }
  }
  public sealed class SystemButtonEventController : AttachableInputObject
  public sealed class SystemFunctionButtonEventArgs
  public sealed class SystemFunctionLockChangedEventArgs
  public sealed class SystemFunctionLockIndicatorChangedEventArgs
  public sealed class TappedEventArgs {
    uint ContactCount { get; }
  }
}
namespace Windows.UI.Input.Inking {
  public sealed class InkModelerAttributes {
    bool UseVelocityBasedPressure { get; set; }
  }
}
namespace Windows.UI.Text {
  public enum RichEditMathMode
  public sealed class RichEditTextDocument : ITextDocument {
    void GetMath(out string value);
    void SetMath(string value);
    void SetMathMode(RichEditMathMode mode);
  }
}
namespace Windows.UI.ViewManagement {
  public sealed class UISettings {
    event TypedEventHandler<UISettings, UISettingsAnimationsEnabledChangedEventArgs> AnimationsEnabledChanged;
    event TypedEventHandler<UISettings, UISettingsMessageDurationChangedEventArgs> MessageDurationChanged;
  }
  public sealed class UISettingsAnimationsEnabledChangedEventArgs
  public sealed class UISettingsMessageDurationChangedEventArgs
}
namespace Windows.UI.ViewManagement.Core {
  public sealed class CoreInputView {
    event TypedEventHandler<CoreInputView, CoreInputViewHidingEventArgs> PrimaryViewHiding;
    event TypedEventHandler<CoreInputView, CoreInputViewShowingEventArgs> PrimaryViewShowing;
  }
  public sealed class CoreInputViewHidingEventArgs
  public enum CoreInputViewKind {
    Symbols = 4,
  }
  public sealed class CoreInputViewShowingEventArgs
  public sealed class UISettingsController
}

The post Windows 10 SDK Preview Build 19028 available now! appeared first on Windows Developer Blog.

The open source Carter Community Project adds opinionated elegance to ASP.NET Core routing


I blogged about NancyFX 6 years ago and since then lots of ASP.NET open source frameworks that build upon - and improve! - web development on .NET have become popular.

There's more than one way to serve an angle bracket (or curly brace), my friends!

Jonathan Channon and the Carter Community (JC was a core Nancy contributor as well) have been making a thin layer of extension methods and conventions on top of ASP.NET Core to make URL routing "more elegant." Carter adds and formalizes a more opinionated framework and also adds direct support for the amazing FluentValidation.

One of the best things about ASP.NET Core is its extensibility model and Carter takes full advantage of that. Carter is ASP.NET.

You can add Carter to your existing ASP.NET Core app by running "dotnet add package carter" and adding it to your Startup.cs:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddCarter();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseEndpoints(builder => builder.MapCarter());
    }
}

At this point you can make a quick "microservice" - in this case just handle an HTTP GET - in almost no code, and it's super clear to read:

public class HomeModule : CarterModule
{
    public HomeModule()
    {
        Get("/", async (req, res) => await res.WriteAsync("Hello from Carter!"));
    }
}

Or you can add Carter as a template so you can later "dotnet new carter." Start by adding the Carter Template with "dotnet new -i CarterTemplate" and now you can make a new boilerplate starter app anytime.

There's a lot of great sample code on the Carter Community GitHub. Head over to https://github.com/CarterCommunity/Carter/tree/master/samples and give them more Stars!

Carter can also cleanly integrate with your existing ASP.NET apps because, again, it's extensions and improvements on top of ASP.NET. Here's how you can add Carter to an ASP.NET Core app that's using Controllers in the MVC pattern:

public void Configure(IApplicationBuilder app)
{
    app.UseRouting();
    app.UseEndpoints(builder =>
    {
        builder.MapDefaultControllerRoute();
        builder.MapCarter();
    });
}

Then easily handle a GET by returning a list of things as JSON like this:

this.Get<GetActors>("/actors", async (req, res) =>
{
    var people = actorProvider.Get();
    await res.AsJson(people);
});
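
To get a feel for the FluentValidation support mentioned earlier, here's a minimal sketch (the Actor model and ActorValidator below are my own illustration, not taken from the Carter samples): you define a validator next to your model, and Carter's validation support can check bound requests against it before your handler does any real work.

using FluentValidation;

public class Actor
{
    public string Name { get; set; }
    public int Age { get; set; }
}

// A standard FluentValidation validator. Carter's validation support can
// use validators like this when a request is bound to the Actor model;
// see the Carter samples for the exact binding/validation calls.
public class ActorValidator : AbstractValidator<Actor>
{
    public ActorValidator()
    {
        RuleFor(a => a.Name).NotEmpty();
        RuleFor(a => a.Age).InclusiveBetween(0, 130);
    }
}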

 

Again, check out Carter on GitHub at https://github.com/CarterCommunity/Carter and follow https://twitter.com/CarterLibs on Twitter!


Sponsor: Like C#? We do too! That’s why we've developed a fast, smart, cross-platform .NET IDE which gives you even more coding power. Clever code analysis, rich code completion, instant search and navigation, an advanced debugger... With JetBrains Rider, everything you need is at your fingertips. Code C# at the speed of thought on Linux, Mac, or Windows. Try JetBrains Rider today!



© 2019 Scott Hanselman. All rights reserved.
     

AI, Machine Learning and Data Science Roundup: November 2019


A roundup of news about Artificial Intelligence, Machine Learning and Data Science. This is an eclectic collection of interesting blog posts, software announcements and data applications from Microsoft and elsewhere that I've noted recently.

Open Source AI, ML & Data Science News

Python 3.8 is now available. From now on, new versions of Python will be released on a 12-month cycle, in October of each year.

Python takes the #2 spot in GitHub's annual ranking of programming language popularity, displacing Java and trailing only JavaScript.

PyTorch 1.3 is now available, with improved performance, deployment to mobile devices, "Captum" model interpretability tools, and Cloud TPU support.

The Gradient documents the growing dominance of PyTorch, particularly in research.

Keras Tuner, hyperparameter optimization for Keras, is now available on PyPI.

ONNX, the open exchange format for deep learning models, is now a Linux Foundation project.

AI Inclusive, a newly-formed worldwide organization to promote diversity in the AI community.

Industry News

Databricks announces the MLflow Model Registry, to share and collaborate on machine learning models with MLflow.

Flyte, Lyft's cloud-native machine learning and data processing platform, has been released as open source.

RStudio introduces Package Manager, a commercial RStudio extension to help organizations manage binary R packages on Linux systems.

Exploratory, a new commercial tool for data science and data exploration, built on R.

GCP releases Explainable AI, a new tool to help humans understand how a machine learning model reaches its conclusions.

Google proposes Model Cards, a standardized way of sharing information about ML models, based on this paper.

GCP AutoML Translation is now generally available, and the GCP Translation API is now available in Basic and Advanced editions.

GCP Cloud AutoML is now integrated with the Kaggle data science competition platform.

Amazon Rekognition adds Custom Labels, allowing users to train the image classification service to recognize new objects with as few as 10 training images per label.

Amazon Sagemaker can now use hundreds of free and paid machine learning models offered in Amazon Marketplace.

The AWS Step Functions Data Science SDK, for building machine learning workflows in Python running on AWS infrastructure, is now available.

Microsoft News

Azure Machine Learning service has released several major updates, including:

Visual Studio Code adds several improvements for Python developers, including support for interacting with and editing Jupyter notebooks.

ONNX Runtime 1.0 is now generally available, for embedded inference of machine learning models in the open ONNX format.

Many new capabilities have been added to Cognitive Services, including:

Bot Framework SDK v4 is now available, and a new Bot Framework Composer has been released on Github for visual editing of conversation flows.

SandDance, Microsoft's interactive visual exploration tool, is now available as open source.

Learning resources

An essay about the root causes of problems with diversity in NLP models: for example, "hers" not being recognized as a pronoun. 

Videos from the Artificial Intelligence and Machine Learning Path, a series of six application-oriented talks presented at Microsoft Ignite.

A guide to getting started with PyTorch, using Google Colab's Free GPU offer.

Public weather and climate datasets, provided by Google.

Applications

The Relightables: capture humans in a custom light stage, drop video into a 3-D scene with realistic lighting.

How Tesla builds and deploys its driving automation models with PyTorch (presentation at PyTorch DevCon).

OpenAI has released the full GPT-2 language generation model.

Spleeter, a pre-trained PyTorch model to separate a music track into vocal and instrument audio files.

Detectron2, a PyTorch reimplementation of Facebook's popular object-detection and image-segmentation library.

Find previous editions of the AI roundup here.


Embracing nullable reference types


Probably the most impactful feature of C# 8.0 is Nullable Reference Types (NRTs). It lets you make the flow of nulls explicit in your code, and warns you when you don’t act according to intent.

The NRT feature holds you to a higher standard on how you deal with nulls, and as such it issues new warnings on existing code. So that those warnings (however useful) don’t break you, the feature must be explicitly enabled in your code before it starts complaining. Once you do that on existing code, you have work to do to make that code null-safe and satisfy the compiler that you did.

How should you think about when to do this work? That’s the main subject of this post, and we propose below that there’s a “nullable rollout phase” until .NET 5 ships (November 2020), wherein popular libraries should strive to embrace NRTs.

But first a quick primer.

Remind me – what is this feature again?

Up until now, in C# we allow references to be null, but we also allow them to be dereferenced without checks. This leads to what is by far the most common exception – the NullReferenceException – when nulls are accidentally dereferenced. An undesired null coming from one place in the code may lead to an exception being thrown later, from somewhere else that dereferences it. This makes null bugs hard to discover and annoying to fix. Can you spot the bug?:

static void M(string s) 
{ 
    Console.WriteLine(s.Length);
}
static void Main(string[] args)
{
    string s = (args.Length > 0) ? args[0] : null;
    M(s);
}

In C# 8.0 we want to help get rid of this problem by being stricter about nulls. This means we’re going to start complaining when values of ordinary reference types (string, object, IDisposable etc) are null. However, new warnings on existing code aren’t something we can just do, no matter how good it is for you! So NRT is an optional feature – you have to turn it on to get new warnings. You can do that either at the project level, or directly in the source code with a new directive:

#nullable enable

If you put this on the example above (e.g. at the top of the file) you’ll get a warning on this line:

    string s = (args.Length > 0) ? args[0] : null; // WARNING!

saying you shouldn’t assign the right-hand-side value to the string variable s because it might be null! Ordinary reference types have become non-nullable! You can fix the warning by giving a non-null value:

    string s = (args.Length > 0) ? args[0] : "";

If you want s to be able to be null, however, that’s fine too, but you have to say so, by using a nullable reference type – i.e. tagging a ? on the end of string:

    string? s = (args.Length > 0) ? args[0] : null;

Now the warning on that line goes away, but of course it shows up on the next line where you’re now passing something that you said may be null (a string?) to something that doesn’t want a null (a string):

    M(s); // WARNING!

Now again you can choose whether to change the signature of M (if you own it) to accept nulls or whether to make sure you don’t pass it a null to begin with.
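
For instance, if you own M and decide it should tolerate nulls, a minimal sketch of that first option looks like this (the early return is just one way of handling the null case):

static void M(string? s)
{
    // The string? parameter tells callers that null is allowed here,
    // and the null check keeps the dereference below warning-free.
    if (s is null) return;
    Console.WriteLine(s.Length);
}

The other choice is to keep the parameter non-nullable and make sure you never hand M a null in the first place.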

C# is pretty smart about this. Let’s only call M if s is not null:

    if (s != null) M(s);

Now the warning disappears. This is because C# tracks the null state of variables across execution flow. In this case, even though s is declared to be a string?, C# knows that it won’t be null inside the true-branch of the if, because we just tested that.

In summary the nullable feature splits reference types into non-nullable reference types (such as string) and nullable reference types (such as string?), and enforces their null behavior with warnings.

This is enough of a primer for the purposes of this post. If you want to go deeper, please visit the docs on Nullable Reference Types, or check some of the earlier posts on the topic (Take C# 8.0 for a spin, Introducing Nullable Reference Types in C#).

There are many more nuances to how you can tune your nullable annotations, and we use a good many of them in our “nullification” of the .NET Core Libraries. The post Try out Nullable Reference Types explores those in great detail.

How and when to become “null-aware”?

Now to the meat of this post. When should you adopt nullable reference types? How to think about that? Here are some observations about the interaction between libraries and clients. Afterwards we propose a shared timeline for the whole ecosystem – the “nullable rollout phase” – to guide the adoption based on what you are building.

What happens when you enable nullable reference types in your code?

You will have to go over your signatures to decide in each place where you have a reference type whether to leave it non-nullable (e.g. string) or make it nullable (e.g. string?). Does your method handle null arguments gracefully (or even meaningfully), or does it immediately check and throw? If it throws on null you want to keep it non-nullable to signal that to your callers. Does your method sometimes return null? If so you want to make the return type nullable to “warn” your callers about it.

You’ll also start getting warnings when you use those members wrong. If you dereference the result of a method that returns string? and you don’t check it for null first, then you’ll have to fix that.

What happens when you call libraries that have the feature enabled?

If you yourself have the feature enabled and a library you depend on has already been compiled with the feature on, then it too will have nullable and non-nullable types in its signatures, and you will get warnings if you use those in the wrong way.

This is one of the core values of NRTs: That libraries can accurately describe the null behavior of the APIs, in a way that is checkable in client code at the call site. This raises expressiveness on API boundaries so that everyone can get a handle on the safe propagation and dereferencing of nulls. Nobody likes null reference exceptions or argument-null exceptions! This helps you write the code right the first time, and avoid the sources of those exceptions before you even compile and run the code.

What happens when you call libraries that have not enabled the feature?

Nothing! If a library was not compiled with the feature on, your compiler cannot assume one way or the other about whether types in the signatures were supposed to be nullable or not. So it doesn’t give you any warnings when you use the library. In nullable parlance, the library is “null-oblivious”. So even though you have opted in to getting the null checking, it only goes as far as the boundary to a null-oblivious library.

When that library later comes out in a new version that does enable the feature, and you upgrade to that version, you may get new warnings! All of a sudden, your compiler knows what is “right” and “wrong” in the consumption of those APIs, and will start telling you about the “wrong”!

This is good of course. But if you adopt NRTs before the libraries you depend on, it does mean that you’ll get some churn as they “come online” with their null annotations.

The nullable rollout phase

Here comes the big ask of you. In order to minimize the impact and churn, I want to recommend that we all think about the next year’s time until .NET 5 (November 2020) as the “nullable rollout phase”, where certain behaviors are encouraged. After that, we should be in a “new normal” where NRTs are everywhere, and everyone can use this feature to track and be explicit about nullability.

What should library authors do?

We strongly encourage authors of libraries (and similar infrastructure, such as code generators) to adopt NRTs during the nullable rollout phase. Pick a time that’s natural according to your shipping schedule, and that lets you get the work done, but do it within the next year. If your clients pester you to do it quicker, you can tell them “No! Go away! It’s still the nullable rollout phase!”

If you do go beyond the nullable rollout phase, however, your clients start having a point that you are holding back their adoption, and causing them to risk churn further down the line.

As a library writer you always face a dilemma between reach of your library and the feature set you can depend on in the runtime. In some cases you may feel compelled to split your library in two so that one version can target e.g. the classic .NET Framework, while a “modern” version makes use of e.g. new types and features in .NET Core 3.1.

However, with Nullable Reference Types specifically, you should be able to work around this. If you multitarget your library (e.g. in Visual Studio) to .NET Standard 2.0 and .NET Core 3.1, you will get the reach of .NET Standard 2.0 while benefitting from the nullable annotations of the .NET Core 3.1 libraries.

You also have to set the language version to C# 8.0, of course, and that is not a supported scenario when one of the target versions is below .NET Core 3.0. However, you can still do it manually in your project settings, and unlike many C# 8.0 features, the NRT feature specifically happens to not depend on specific elements of .NET Core 3.1. But if you try to use other language features of C# 8.0 while targeting .NET Standard 2.0, all bets are off!

What should library users do?

You should be aware that there’s a nullable rollout phase where things will be in flux. If you don’t mind the flux, by all means turn the feature on right away! It may be easier to fix bugs gradually, as libraries come online, rather than in bulk.

If you do want to save up the work for one fell swoop, however, you should wait for the nullable rollout phase to be over, or at least for all the libraries you depend on to have enabled the feature.

It’s not fair to nag your library providers about nullability annotations until the nullable rollout phase is over. Engaging them to help get it done, through OSS or as early adopters or whatever, is of course highly encouraged, as always.

What will Microsoft do?

We will also aim to be done with null-annotating our core libraries when .NET 5 comes around – and we are currently on track to do so. (Tracking issue: Annotate remainder of .NET Core assemblies for nullable reference types).

We will also keep a keen eye on the usage and feedback during this time, and we will feel free to make adjustments anywhere in the stack, whether library, compilers or tooling, in order to improve the experience based on what we hear. Adjustments, not sweeping changes. For instance, this and this issue were already addressed by this and this fix.

When .NET 5 rolls around, if we feel the nullable rollout phase has been a success, I could see us turning the feature on by default for new projects in Visual Studio. If the ecosystem is ready for it, there is no reason why any new code should ignore the improved safety and reliability you get from nullability annotations!

At that point, the mechanisms for opt-in and opt-out become effectively obsolete – a mechanism to deal with legacy code.

Call to action

Make a plan! How are you going to act on nullable reference types? Try it out! Turn it on in your code and see what happens. Scary many warnings? That may happen until you get your signatures annotated right. After that, the remaining warnings are about the quality of your consuming code, and those are the reward: an opportunity to fix the places where your code is probably not null safe!

And as always: Have fun exploring!

Happy hacking,

Mads Torgersen, C# lead designer

The post Embracing nullable reference types appeared first on .NET Blog.

Azure IoT Tools November Update: standalone simulator for Azure IoT Edge development and more!


Welcome to the November update of Azure IoT Tools!

In this November release, you will see the new standalone simulator for Azure IoT Edge development, the support of Vcpkg for IoT Plug and Play development and more new features.

Deploy Event Grid module on Azure IoT Edge

Event Grid on IoT Edge brings the power and flexibility of Azure Event Grid to the edge for all pub/sub and event driven scenarios. There are several ways to deploy Event Grid module in VS Code.

1. When adding a new module to your new or existing IoT Edge solution, now there is a new option to choose Azure Event Grid

2. When adding a new module to your new or existing IoT Edge solution, select Module from Azure Marketplace, you can see Azure Event Grid on IoT Edge.

3. In VS Code command palette, type and select Azure IoT Edge: Show Sample Gallery. You can open a new sample with pub/sub Functions along with Event Grid module.

Click here to learn more about Azure Event Grid on IoT Edge.

Standalone simulator for Azure IoT Edge development

For Azure IoT Edge developers, we have the Azure IoT EdgeHub Dev Tool to provide a local development experience with a simulator for creating, developing, testing, running, and debugging Azure IoT Edge modules and solutions. However, the Azure IoT EdgeHub Dev Tool runs on top of a Python environment, and not every Azure IoT Edge developer, especially those using Windows as their development environment, has Python and pip installed. Therefore, we have shipped a standalone simulator for the Azure IoT EdgeHub Dev Tool so that developers who use Windows as their development environment no longer need to set up a Python environment. The standalone simulator has already been integrated into the latest release of Azure IoT Tools for Visual Studio Code, so when you use Azure IoT Tools for Visual Studio Code, no separate Python installation is required to run the simulator.

Support Vcpkg for IoT Plug and Play development

Vcpkg is a cross-platform library manager that helps you manage C and C++ libraries on Windows, Linux, and macOS. With the support of Vcpkg for IoT Plug and Play development, developers can easily leverage Vcpkg to manage the Azure IoT C device SDK as well as other C/C++ dependencies.

Previously, source code was the only way to include the Azure IoT C device SDK. Now, developers can generate the IoT Plug and Play device code stub via either Vcpkg or source code.

For more details with the step-by-step instructions, you can check out this tutorial to see how to create an IoT Plug and Play device via Vcpkg.

Configure an Embedded Linux C project using containerized device toolchain

We released the preview experience of the containerized toolchain a few months ago, aiming to simplify toolchain acquisition for device developers working on C/C++ projects for Embedded Linux, which require the cross-compiling toolchain, device SDK, and dependent libraries to be set up properly. Instead of doing this on the local machine, which could lead to a messed-up environment, we provide a couple of common container images for devices with various architectures (e.g. ARMv7, ARM64, and x86).

And now you can take this feature further by configuring an existing C/C++ project to compile in the container and then deploy to the target device you use. If you want to further customize the container, you can add the extra device libraries and packages that your device requires.

Check the tutorials to learn how to use it for your existing code base.

Try it out

Please don’t hesitate to give it a try and if you’re new to Azure, remember you can sign up for a free Azure account to get $200 free Azure credit and access to over 25 always free services (including Azure IoT Hub)! If you have any feedback, feel free to reach us at https://github.com/microsoft/vscode-azure-iot-tools/issues. We will continuously improve our IoT developer experience to empower every IoT developers on the planet to achieve more!

The post Azure IoT Tools November Update: standalone simulator for Azure IoT Edge development and more! appeared first on Visual Studio Blog.

Top Stories from the Microsoft DevOps Community – 2019.11.29


While our American colleagues are busy enjoying their Thanksgiving break, I wanted to post about something I'm extremely thankful for. No, not the two days without any meetings this week (although that was awesome), but the incredible DevOps community building exciting things with the help of Azure.


Open Source Cloud Summit Johannesburg – IoT Edge Lab

While folks in the US were busy eating pumpkin pie and fixing their relatives' laptops on Thanksgiving, the community in Johannesburg was holding an Open Cloud Summit. Some amazing posts came out of the #OSSSummitJHB hashtag, but my personal favorite was the Azure IoT Edge Hands On Lab from MVP Allan Pead. Allan has run this lab at a couple of IoT hack days this month and I'm very jealous – I definitely want to give it a go. In this lab you learn how to do CI/CD to a Raspberry Pi based robot using Azure Pipelines. For more information take a look at the Hands On Lab repo on GitHub.

100 Days of Infrastructure as Code in Azure

Ryan Irujo, Pete Zerger and Tao Yang have been learning different areas of Infrastructure as Code in Azure and this week they have been digging more into YAML Pipelines. It’s definitely worth following along with them by adding a watch on their GitHub repo so that you get notified of changes. (Also don’t forget to sign up for the beta of the new GitHub Mobile app if you want to manage your notifications on the go)

How to Configure CI/CD in Azure DevOps

Over on the excellent Redgate Hub sysadmin blog, Joydip Kanjilal posted a very comprehensive run-through of the process of setting up a basic CI/CD pipeline for a .NET Core app with Visual Studio 2019, Azure Pipelines, and Azure. While it's a demo I do often and there is plenty of help available for it, it's great to see such a simple and detailed walk-through of this 'bread and butter' pipeline aimed at the community of sysadmins. While you are there, be sure to check out the excellent Redgate extensions for Azure DevOps, which make doing CD with SQL Server databases a lot easier.

Use GitHub Actions to deploy code to Azure

Popular tech columnist Simon Bisson wrote up how to use the new GitHub Actions for Azure to deploy straight from GitHub to your Azure service of choice. After reading his article, if you want to learn more about the GitHub Actions for Azure, check out the blog post from last week – note that there is even an action to trigger Azure Pipelines, which can come in handy should you want to do your CI build using GitHub Actions and then trigger a release using Azure Pipelines.

3 Ways to run Automated Tests on Azure DevOps

On the TechFabric blog, Seleznov Ihor has posted a deep dive into three ways to run automated tests in Azure Pipelines: unit tests, UI tests, and API tests, in this case with a .NET Core application.

Continuous Infrastructure in GCP using Azure Pipelines

Ashish Raj has been on a roll lately with Azure DevOps content and this week was no different, with a great look into using GCP with Azure Pipelines and Terraform. His short (15m) video on YouTube is well worth a watch if multi-cloud deployments with Terraform are something you are looking into.

The Unicorn Project

Last but not least, one final thing to be thankful for is that Gene Kim's latest book, The Unicorn Project, is now available. Like with The Phoenix Project, Gene explains how DevOps principles work in practice using a fictional narrative that works really well and keeps you engaged. This time the story of Parts Unlimited is told from the position of the engineering teams on the ground facing hard choices and trying to do the right thing while facing difficult deadlines and fighting for the very survival of the business. Many of the incidents and scenarios ring true from my time as a consultant (the mention of CSV BOMs made me shiver thinking about the time that tripped me up) but also times even here at Microsoft where we've let technical debt build up and had to recognize that fact and pay it back down. I would encourage everyone to read the book and buy several copies for folks on your team, as you'll quickly find yourself looking at situations at work and thinking 'What Would Maxine Do'. The term 'digital transformation' can be overused and full of buzzwords – but this book does a great job of explaining what it actually means and what it feels like to go through it. Even better, as it's a narrative, the audiobook version works really well too and is narrated by the award-winning professional actor/producer Frankie Corzo, making it a great listen on the go.

Enjoy the rest of the holiday weekend if you are in the US. Don’t forget, if you’ve written an article about Azure DevOps or find some great content about DevOps on Azure, please share it with the #AzureDevOps hashtag on Twitter!

The post Top Stories from the Microsoft DevOps Community – 2019.11.29 appeared first on Azure DevOps Blog.

Application Gateway Ingress Controller for Azure Kubernetes Service


Today we are excited to offer a new solution to bind Azure Kubernetes Service (AKS) and Application Gateway. The new solution provides an open source Application Gateway Ingress Controller (AGIC) for Kubernetes, which makes it possible for AKS customers to leverage Application Gateway to expose their cloud software to the Internet.

Bringing together the benefits of Azure Kubernetes Service, our managed Kubernetes service that makes it easy to operate advanced Kubernetes environments, and Azure Application Gateway, our native, scalable, and highly available L7 load balancer, has been highly requested by our customers.

How does it work?

Application Gateway Ingress Controller runs in its own pod on the customer's AKS cluster. The Ingress Controller monitors a subset of Kubernetes resources for changes. The state of the AKS cluster is translated to Application Gateway-specific configuration and applied to Azure Resource Manager. The continuous re-configuration of Application Gateway ensures an uninterrupted flow of traffic to AKS services. The diagram below illustrates the flow of state and configuration changes from the Kubernetes API, via Application Gateway Ingress Controller, to Resource Manager and then Application Gateway.

Much like the most popular Kubernetes Ingress Controllers, the Application Gateway Ingress Controller provides several features, leveraging Azure’s native Application Gateway L7 load balancer. To name a few:

  • URL routing
  • Cookie-based affinity
  • Secure Sockets Layer (SSL) termination
  • End-to-end SSL
  • Support for public, private, and hybrid web sites
  • Integrated web application firewall


The architecture of the Application Gateway Ingress Controller differs from that of a traditional in-cluster L7 load balancer. The architectural differences are shown in this diagram:

[Diagram: architectural comparison of an in-cluster L7 load balancer and the Application Gateway Ingress Controller]

  • An in-cluster load balancer performs all data path operations leveraging the Kubernetes cluster’s compute resources. It competes for resources with the business apps it is fronting. In-cluster ingress controllers create Kubernetes Service Resources and leverage kubenet for network traffic. In comparison to the Application Gateway Ingress Controller, traffic flows through an extra hop.
  • The Application Gateway Ingress Controller leverages AKS advanced networking, which allocates an IP address for each pod from the subnet shared with Application Gateway. Application Gateway has direct access to all Kubernetes pods. This eliminates the need for data to pass through kubenet. For more information on this topic see our “Network concepts for applications in Azure Kubernetes Service” article, specifically the “Comparing network models” section.

Solution performance

As a result of Application Gateway having direct connectivity to the Kubernetes pods, the Application Gateway Ingress Controller can achieve up to 50 percent lower network latency vs in-cluster ingress controllers. Application Gateway is a managed service, backed by Azure virtual machine scale sets. As a result, Application Gateway does not use AKS compute resources for data path processing. It does not share or interfere with the resources allocated to the Kubernetes deployment. Autoscaling Application Gateway at peak times, unlike an in-cluster ingress, will not impede the ability to quickly scale up the apps’ pods. And of course, switching from in-cluster L7 ingress to Application Gateway will immediately decrease the compute load used by AKS.

We compared the performance of an in-cluster ingress controller and Application Gateway Ingress Controller on a three node AKS cluster with a simple web app running 22 pods per node. A total of 66 web app pods shared resources with three in-cluster ingresses – one per node. We configured Application Gateway with an instance count of two. We used Apache Bench to create a total of 100K requests with concurrency set at 3K requests. We launched Apache Bench twice: once pointing it to the SLB fronting the in-cluster ingress controller, and a second time connecting to the public IP of Application Gateway. On this very busy AKS cluster we recorded the mean latency across all requests:

  • Application Gateway: 480ms per request
  • In-cluster Ingress: 710ms per request

As proven by the data gathered above, under heavy load, the in-cluster ingress controller has approximately 48 percent higher latency per request compared to Application Gateway ingress. Running the same benchmark on the same cluster but with two web app pods per node, a total of six pods, we observed the in-cluster ingress controller performing with approximately 17 percent higher latency than Application Gateway.

What’s next?

Application Gateway Ingress Controller is now stable and available for use in production environments. The project is maturing quickly, and we are working actively to add new capabilities. We are working on enhancing the product with features that customers have been asking for, such as using certificates stored on Application Gateway, mutual TLS authentication, gRPC, and HTTP/2. We invite you to try the new Application Gateway Ingress Controller for AKS, follow our progress, and most importantly - give us feedback on GitHub.

Azure Cost Management updates – November 2019


Whether you're a new student, thriving startup, or the largest enterprise, you have financial constraints and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Microsoft Azure Cost Management comes in.

We're always looking for ways to learn more about your challenges and how Cost Management can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Let's dig into the details.

Cost Management now available for Cloud Solution Providers

In case you missed it, as of November 1, Cloud Solution Provider (CSP) partners can now see and manage costs for their customers using Azure Cost Management in the Azure portal by transitioning them to Azure plan subscriptions via Microsoft Customer Agreement. Partners can also enable Azure Cost Management for customers to allow them to see and manage the cost of their subscriptions.

If you're working with a CSP partner to manage your Azure subscriptions, talk to them about getting you onboarded and your subscriptions switched over to the new Azure plan using Microsoft Customer Agreement. Not only will this allow you to see and manage costs in the Azure portal, but you'll also be able to use some Azure services that aren't currently available to your classic CSP subscriptions. As an example, some organizations have dependencies on external solutions that still require classic services, including virtual machines. To work around this, organizations are creating separate pay-as-you-go subscriptions for those resources. This adds additional overhead to manage separate billing accounts with Microsoft and your partner. Once you've switched over to Azure plan subscriptions, you may be able to consolidate any existing CSP and non-CSP subscriptions into a single billing account, managed by your partner. In general, you'll have the same benefits and offerings at the same time as everyone else using Microsoft Customer Agreement. Make sure you talk to your partner today!

If you're a CSP provider, enabling Cost Management for your customers involves three steps:

  1. Confirm acceptance of the Microsoft Customer Agreement on behalf of your customers
    Present the Microsoft Customer Agreement to your customers and, once they've agreed, confirm the customer's official acceptance in Partner Center or via the API/SDK.
  2. Transition your customers to Azure plan
    The last step for you, as the partner, to see and manage cost in the Azure portal is to transition existing CSP offers to an Azure plan. You'll need to do this once for each reseller and direct customer.
  3. Enable Azure Cost Management for your customers
    In order for your customers to see and manage costs in Azure Cost Management, they need to have access to view charges for their subscriptions. This can be enabled from the Azure portal for each customer and shows them their cost based on pay-as-you-go prices; it does not include partner discounts or any discounts you may offer. Please ensure your customers understand the cost will not match your invoice if you offer additional discounts or use custom prices.

To learn more about what you'll see after enabling Azure Cost Management for your customers, read Get started with Azure Cost Management for partners.

What's new in Cost Management Labs

With Cost Management Labs, you get a sneak peek at what's coming in Azure Cost Management and can engage directly with us to share feedback and help us better understand how you use the service so we can deliver more tuned and optimized experiences. Here are a few features you can see in Cost Management Labs:

  • Get started quicker with the cost analysis Home view
    Cost Management offers five built-in views to get started with understanding and drilling into your costs. The Home view gives you quick access to those views so you get to what you need faster.
  • Performance optimizations in cost analysis and dashboard tiles (now available in the public portal)
    Whether you're using tiles pinned to the dashboard or the full experience, you'll find cost analysis loads faster than ever.
  • NEW: Show the view name on pinned cost analysis tiles (now available in the public portal)
    When you pin cost analysis to the dashboard, it now shows the name of the view you pinned. To change it, simply save the view with the desired name and pin cost analysis again!
  • NEW: Quick access to cost analysis help and support (now available in the public portal)
    Have a question? Need help? The quickstart tutorial is now one click away in cost analysis. And if you run into an issue, create a support request from cost analysis to send additional context to help you submit and resolve your issue quicker than ever.
    Use the 'Quickstart tutorial' command at the top of cost analysis to see documentation and 'New support request' to create a support request with additional context to resolve your issue quicker

Of course, that's not all. Every change in Cost Management is available in Cost Management Labs a week before it's in the full Azure portal. We're eager to hear your thoughts and understand what you'd like to see next. What are you waiting for? Try Cost Management Labs today.

Customizing the name on dashboard tiles

You already know you can save and share views in cost analysis. You'll typically start by saving a customized view in cost analysis so others can use it. You might share a link so they can jump directly into the view from outside the portal or share an image of the view to include in an email or presentation. But if you really want to keep an eye on specific perspectives of your cost every time you sign in to the portal, the best option is to pin your view to the dashboard.

Azure portal dashboard with tiles for all the built-in views available in Azure Cost Management

Pinning is easy: Just click the pin icon in the top-right corner of cost analysis and you're done. When you pin your view, the tile shows the name of your view, the scope it represents, and the main chart or table from cost analysis. If you have an older tile you need to rename, open it in cost analysis, click Save as to change the name of the view, then pin it again.

Enjoy and let us know what you'd like to see next!

Upcoming changes to Azure usage data

Many organizations use the full Azure usage and charges to understand what's being used, identify what charges should be internally billed to which teams, and to look for opportunities to optimize costs with Azure reservations and Azure Hybrid Benefit. If you're doing any analysis or setup integration based on product details in the usage data, please update your logic for the following services. All of the following changes will start effective December 1:

Also, remember the key-based EA billing APIs have been replaced by new Azure Resource Manager APIs. The key-based APIs will still work through the end of your enrollment, but will no longer be available when you renew and transition into Microsoft Customer Agreement. Please plan your migration to the latest version of the UsageDetails API to ease your transition to Microsoft Customer Agreement at your next renewal.

Save up to 72 percent with Azure reservations – now available for 16 services

Azure reservations help you save up to 72% compared to pay-as-you-go rates when you commit to one or three years of usage. You may know Azure Advisor tells you when you can save money with virtual machine reservations, but did you know with the addition of six new services, you can now purchase reservations for a total of 16 services? Here's the full list as of today:

  • Virtual machines and managed disks
  • Blob storage
  • App Service
  • SQL database and data warehouse
  • Azure Database for MySQL, MariaDB, and PostgreSQL
  • Cosmos DB
  • Data Explorer
  • Databricks
  • SUSE and Red Hat Linux
  • Azure Red Hat OpenShift
  • Azure VMware Solution by CloudSimple

What services would you like to see next? Learn more about Azure reservations and start saving today!

New videos

If you weren't able to make it to Microsoft Ignite 2019 or didn't catch the Azure Cost Management sessions, they're now available online and open for everyone:

If you're looking for something a little shorter, you can also check out these videos:

Subscribe to the Azure Cost Management YouTube channel to stay in the loop with new videos as they're released and let us know what you'd like to see next.

Documentation updates

There were many documentation updates. Here are a few you might be interested in:

Want to keep an eye on all of the documentation updates? Check out the Cost Management doc change history in the azure-docs repository on GitHub. If you see something missing, select Edit at the top of the document and submit a quick pull request.

What's next?

These are just a few of the big updates from last month. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming.

Follow @AzureCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. And, as always, share your ideas and vote up others in the Cost Management feedback forum.
