
Lighting up my DasKeyboard with Blood Sugar changes using my body’s REST API


I've long blogged about the intersection of diabetes and technology. From the sad state of diabetes tech in 2012 to its recent promising resurgence, it's clear that we are not waiting.

If you're a Type 1 Diabetic using a CGM - a continuous glucose meter - you'll want to set up Nightscout so you can have a REST API for your sugar. The CGM checks my blood sugar every 5 minutes; the reading hops via BLE over to my phone and then up to the cloud. You'll want your sugars stored in cloud storage that YOU control. CGM vendors have their own cloud, but we can easily bridge over to a MongoDB database.

I run Nightscout in Azure and my body has a REST API. I can do an HTTP GET like this:

/api/v1/entries.json?count=3

and get this

[
    {
        "_id": "5c6066d477b2a69a0a7810e5",
        "sgv": 143,
        "date": 1549821626000,
        "dateString": "2019-02-10T18:00:26.000Z",
        "trend": 4,
        "direction": "Flat",
        "device": "share2",
        "type": "sgv"
    },
    {
        "_id": "5c6065a877b2a69a0a7801ce",
        "sgv": 134,
        "date": 1549821326000,
        "dateString": "2019-02-10T17:55:26.000Z",
        "trend": 4,
        "direction": "Flat",
        "device": "share2",
        "type": "sgv"
    },
    {
        "_id": "5c60647b77b2a69a0a77f381",
        "sgv": 130,
        "date": 1549821026000,
        "dateString": "2019-02-10T17:50:26.000Z",
        "trend": 4,
        "direction": "Flat",
        "device": "share2",
        "type": "sgv"
    }
]

I can change the URL from a .json to a .txt and get this:

2019-02-10T18:00:26.000Z    1549821626000    143    Flat
2019-02-10T17:55:26.000Z    1549821326000    134    Flat
2019-02-10T17:50:26.000Z    1549821026000    130    Flat

The "flat" value at the end is part of an enum that can give me a generalized trend value. Diabetics need to manage our sugars at the least hour by house and sometimes minute by minute. As such it's super important that we have "glanceable displays." That means anything at all that gives me a sense (a sixth sense, if you will) of how I'm doing.

Diabetics need to manage our sugars at least hour by hour, and sometimes minute by minute, so it's super important that we have "glanceable displays" - anything at all that gives me a sense (a sixth sense, if you will) of how I'm doing. That might be any number of things; in my case, it's now my keyboard.

I got a Das Keyboard 5Q recently - I first blogged about Das Keyboard in 2006! - and noted that it has its own local REST API. I'm working on using their Das Keyboard Q software's Applet API to light up just the top row of keys in response to my blood sugar changing. It'll use their Node packages and JavaScript and run in the context of their software.

However, since the keyboard has a localhost REST API and so does my blood sugar, I busted out this silly little shell script. Add a cron job and my keyboard can turn from orange (low) to green, to yellow, to red (high) as my sugar changes. That provides a nice ambient notifier of how my sugars are doing. Someone on Twitter said "who looks at their keyboard?" I mean, OK, that's just silly. If my entire keyboard turns red I will notice it. Again, ambient. I could certainly add an alert and make a klaxon go off if you'd like.

#!/bin/sh

# This script colorizes all LEDs of a 5Q keyboard
# by sending JSON signals to the Q desktop public API,
# based on blood sugar values from Nightscout.
set -e # quit on first error.
PORT=27301

# Colorize the 5Q keyboard
PID="DK5QPID" # product ID

# Zones are LED groups. There are fewer than 166 zones on a 5Q.
# This should cover the whole device.
MAX_ZONE_ID=166

# Get blood sugar from Nightscout as TEXT
red="#f00"
green="#0f0"
yellow="#ff0"
# deep orange is LOW sugar
COLOR="#f50"
bgvalue=$(curl -s https://MYSITE/api/v1/entries.txt?count=1 | grep -Eo '000\s([0-9]{1,3})+\s' | cut -f 2)
if [ "$bgvalue" -gt 80 ]
then
    COLOR=$green
    if [ "$bgvalue" -gt 140 ]
    then
        COLOR=$yellow
        if [ "$bgvalue" -gt 200 ]
        then
            COLOR=$red
        fi
    fi
fi

echo "Sugar is $bgvalue and color is $COLOR!"

for i in $(seq $MAX_ZONE_ID)
do
    #echo "Sending signal to zoneId: $i"
    # IMPORTANT NOTE: if the "name" and "message" fields are empty, the signal is
    # only displayed on the device's LEDs, not in the signal center
    curl -s -S --output /dev/null -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' -d '{
    "name": "Nightscout",
    "id": "'$i'",
    "message": "Blood sugar is '$bgvalue'",
    "pid": "'$PID'",
    "zoneId": "'"$i"'",
    "color": "'$COLOR'",
    "effect": "SET_COLOR"
    }' "http://localhost:$PORT/api/1.0/signals"
done
echo "\nDone.\n"

This local keyboard API is meant to send a signal to a single zone or key, so it's hacky of me (and them, really) to make 100+ REST calls to color the whole keyboard. But, it's a localhost call and it's not that spendy. This will go away when I move to their new API. Here's a video of it working.

You can also hit the volume button on the keyboard on any "signaled" (lit up) key and get a popup with the actual blood sugar value (that's "message" in the second curl command above). Again, this is a hack, but I'm going to make it a formal applet you can just install from the store. If you want to help (I'm slow) head to the code here: https://github.com/shanselman/DasKeyboard-Q-NightScout

Got my keyboard keys changing color *when my blood sugar goes up!* @daskeyboard @NightscoutProj #WeAreNotWaiting #diabetes pic.twitter.com/DSBDcrO7RE

— Scott Hanselman (@shanselman) February 8, 2019

What are some other good ideas for ambient sugar alerts? An LED strip around the monitor (bias lighting)? A Philips Hue smart light?

Consider also that you could use the glanceable display idea for pulse, anxiety, blood pressure - anything in your body you could hook up to in real time or near-real time.


Sponsor: Get the latest JetBrains Rider with Code Vision, Rename Project refactoring, and the Assembly Explorer. Improved support for C#, VB.NET, F#, TypeScript, and Angular is all included.



© 2018 Scott Hanselman. All rights reserved.
     

Get started quickly using templates in Azure Data Factory


Cloud data integration helps organizations integrate data of various forms and unify complex processes in a hybrid data environment. Different organizations often have similar data integration needs and repeatable business processes, and their data engineers and developers want to get started quickly with building data integration solutions instead of building the same workflows over and over. Today, we are announcing support for templates in Azure Data Factory (ADF) to help you get started quickly with building data factory pipelines, improve developer productivity, and reduce development time for repeat processes. The template feature provides a 'Template gallery' containing use-case-based templates, data movement templates, SSIS templates, and transformation templates that you can use to get hands-on with building your data factory pipelines.

Simply click Create pipeline from template on the Overview page or click +-> Pipeline from template on the Author page in your data factory UX to get started.


Select any template from the gallery and provide the necessary inputs to use it. You can also read a detailed description of the template or visualize the end-to-end data factory pipeline.


You can also create new connections to your data store or compute while providing the template inputs.


Once you click Use this template, you are taken to the template validation output. This guides you to fill in the required properties needed to publish and run the pipeline created from the template.


In addition to using out-of-the-box templates from the Template gallery, you might want to save your existing pipelines as templates as well - for example, when different business units within your organization want to use the same pipeline with different inputs. The templates feature in data factory lets you do exactly that.


Saving your pipeline as a template requires you to enable Git integration (Azure DevOps Git or GitHub) in your data factory.


The template is then saved in your Git repo under the templates folder.


The template is now visible to anyone who has access to your Git repo, and can be seen in the Templates section of the resource explorer.


You can also see the template under the My templates section in the template gallery.


Saving the template to the Git repository generates two files: an ARM template and a manifest file. The ARM template contains all the information about your data factory pipeline, including pipeline activities, linked services, and datasets. The manifest file contains the template description, author, tile icons, and other metadata about the template.
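
For illustration only, a manifest might look something like the sketch below. The field names here are hypothetical - inferred from the description above rather than taken from the documented schema - so check the official templates in the Data Factory GitHub location mentioned below for real examples:

{
  "name": "Copy data from Amazon S3 to Azure Data Lake Store",
  "description": "Hypothetical example - field names are assumptions, not the documented schema.",
  "author": "Contoso",
  "icons": ["templateIcon.svg"]
}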


All the ARM template and manifest files for the out-of-the-box official templates provided in the Template gallery can be seen in the official Data Factory GitHub location. In the future, we will be working with our partners on a certification process wherein anyone can submit a template for the Template gallery. The data factory team will certify the pull request corresponding to the submitted template and make it available in the Template gallery.

Find more information about the templates feature in data factory.

Our goal is to continue adding features to improve the usability of Data Factory tools. Get started building pipelines easily and quickly using Azure Data Factory. If you have any feature requests or want to provide feedback, please visit the Azure Data Factory forum.

Azure.Source – Volume 69


Now in preview

Account failover now in public preview for Azure Storage

Account failover for geo-redundant storage (GRS) enabled storage accounts is now available in preview, giving you control over when a failover happens - useful when you need to restore write access and you understand the replication state of the secondary. If the primary region for your geo-redundant storage account becomes unavailable for an extended period of time, you can force an account failover. As is the case with most previews, account failover should not be used with production workloads; there is no production SLA until the feature becomes generally available.
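
As a rough sketch of what this looks like from the Azure CLI (assuming the preview failover command is available in your CLI version - check az storage account failover --help first):

# Check how far the secondary has replicated before failing over.
az storage account show --name mystorageaccount \
  --expand geoReplicationStats --query geoReplicationStats.lastSyncTime

# Force the account to fail over to the secondary region (preview).
az storage account failover --name mystorageaccount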

Azure Stream Analytics now supports Azure SQL Database as reference data input

Reference data is a dataset that is static or slow changing in nature that you can correlate with real-time data streams to augment the data. Stream Analytics leverages versioning of reference data to augment streaming data by the reference data that was valid at the time the event was generated. You can try using Azure SQL Database as a source of reference data input to your Stream Analytics job today. This feature is available for public preview in all Azure regions. This feature is also available in the latest release of Stream Analytics tools for Visual Studio.

Diagram showing Stream Analytics with Streaming Sources and Hot path analytics

Also in preview

Now generally available

Individually great, collectively unmatched: Announcing updates to 3 great Azure Data Services

Azure Data Lake Storage Gen2 and Azure Data Explorer are now generally available, while Azure Data Factory Mapping Data Flow is available in preview. Azure Data Lake Storage (ADLS) combines the scalability, cost effectiveness, security model, and rich capabilities of Azure Blob Storage with a high-performance file system that is built for analytics and is compatible with the Hadoop Distributed File System. Azure Data Explorer (ADX) is a fast, fully managed data analytics service for real-time analysis on large volumes of streaming data. ADX is capable of querying 1 billion records in under a second with no modification of the data or metadata required. ADX also includes native connectors to Azure Data Lake Storage, Azure SQL Data Warehouse, and Power BI and comes with an intuitive query language so that customers can get insights in minutes. With Mapping Data Flow in Azure Data Factory, customers can visually design, build, and manage data transformation processes without learning Spark or having a deep understanding of their distributed infrastructure.

An overview of Azure Data Explorer (ADX) | Azure Friday

Manoj Raheja joins Lara Rubbelke to demonstrate Azure Data Explorer (ADX) and provide an overview of the service from provisioning to querying. ADX is a fast, fully managed data analytics service for real-time analysis on large volumes of streaming data. It brings together a highly performant and scalable cloud analytics service with an intuitive query language to deliver near-instant insights.

Code-free data transformation at scale using Azure Data Factory | Azure Friday

Learn about the new code-free visual data transformation capabilities in Azure Data Factory as Gaurav Malhotra joins Lara Rubbelke to demonstrate how you can visually design, build, and manage data transformation processes without learning Spark or having a deep understanding of the distributed infrastructure.

Azure Cost Management now generally available for enterprise agreements and more!

Azure Cost Management is now generally available for all our Enterprise Agreement (EA) customers from within the Azure portal. Azure Cost Management enables you to monitor all your spend through easy-to-use dashboards, create budgets, and optimize your cloud spend. This post also announces the public preview for web direct Pay-As-You-Go customers and the Azure Government cloud. Azure Cost Management is available for free to all customers and partners to manage Azure costs. The Cloudyn portal will continue to be available to customers while we integrate all relevant functionality into native Azure Cost Management.

Microsoft Healthcare Bot brings conversational AI to healthcare

The Microsoft Healthcare Bot is a white-labeled cloud service that powers conversational AI for healthcare. It’s designed to empower healthcare organizations to build and deploy compliant, AI-powered virtual health assistants and chatbots that help them put more information in the hands of their users, enable self-service, drive better outcomes, and reduce costs. The Microsoft Healthcare Bot is now available in the Azure Marketplace.

Also generally available

News and updates

Analytics in Azure is up to 14x faster and costs 94% less than other cloud providers. Why go anywhere else?

Julia White, Corporate Vice President, Microsoft Azure, covers how Azure provides the most comprehensive set of analytics services, from data ingestion to storage to data warehousing to machine learning and BI. Each of these services has been finely tuned to provide industry-leading performance, security, and ease of use, at unmatched value. A recent study by GigaOm found that Azure SQL Data Warehouse outperforms the competition by up to 14x at up to 94% lower cost.

Chart showing price performance comparison between Azure SQL DW, Amazon Redshift, and Google BigQuery

Configure resource group control for your Azure DevTest Lab

You now have the option to configure all your lab virtual machines (VMs) to be created in a single resource group. Learn how you can improve governance of your development and test environments by using Azure policies that you can apply at the resource group level. You can use a script to specify a new or existing resource group within your Azure subscription in which to create all your lab VMs. ARM environments created in your lab will continue to remain in their own resource groups and will not be affected by any option you select while working with this API.

Reserved instances now applicable to classic VMs, cloud services, and Dev/Test subscriptions

Two new Azure Reserved VM Instance (RI) features are now available that can provide you with additional savings and purchase controls. Classic VMs and Cloud Services users, as well as Enterprise Dev/Test and Pay-As-You-Go Dev/Test subscriptions, can now benefit from RI discounts.

New connectors added to Azure Data Factory empowering richer insights

Azure Data Factory (ADF) is a fully managed data integration service for analytic workloads in Azure that empowers you to copy data from more than 80 data sources with a simple drag-and-drop experience. Also, with its flexible control flow, rich monitoring, and CI/CD capabilities, you can operationalize and manage the ETL/ELT flows to meet your SLAs. A set of eight new Azure Data Factory connectors are now available that enable more scenarios and possibilities for your analytic workloads, including the ability to ingest data from Google Cloud Storage and Amazon S3.

Intelligent Edge support grows – Azure IoT Edge now available on virtual machines

Azure IoT Edge enables you to bring cloud intelligence to the edge and act immediately on real-time data. Azure IoT Edge already supports a variety of Linux and Windows operating systems as well as a spectrum of hardware from devices smaller than a Raspberry Pi to servers. Supporting IoT Edge in VMware vSphere offers you even more choice if you want to run AI on infrastructure you already own. VMware simplified the deployment process of Azure IoT Edge to VMs using VMware vSphere. Additionally, vSphere 6.7 and later provide passthrough support for Trusted Platform Module (TPM), allowing Azure IoT Edge to maintain its industry leading security framework by leveraging the hardware root of trust.

Completers in Azure PowerShell

Since version 3.0, PowerShell has supported applying argument completers to cmdlet parameters. We added argument completers to the Azure PowerShell modules so that you can select valid parameter values without having to look them up yourself; the completers make the required calls to Azure to obtain the valid parameter values. Argument completers have been added for Location, Resource Group Name, Resource Name, and Resource Id.

Screenshot showing Location Completer in Azure PowerShell

Simplify Always On availability group deployments on Azure VM with SQL VM CLI

Always On availability groups (AG) provide high availability and disaster recovery capabilities to your SQL Server database, whether on-premises, in the cloud, or a combination of both. Deploying an Always On Availability Group configuration for SQL Server on Azure VM is now possible with a few simple steps using the expanded capabilities enabled by SQL VM resource provider and Azure SQL VM CLI.

Help us shape new Azure migration capabilities: Sign up for early access!

We are enhancing Azure Migrate to deliver a unified and extensible migration experience - an integrated, end-to-end journey that enables customers and partners to plan, execute, and track the discovery, assessment, and migration of servers to Azure. Sign up for the private preview to try the enhanced assessment and migration capabilities.

Microsoft Azure portal February 2019 update

In February, the Azure portal will bring you updates to several compute (IaaS) resources, the ability to export contents of lists of resources and resource groups as CSV files, an improvement to the layout of essential properties on overview pages, enhancements to the experience on recovery services pages, and expansions of setting options in Microsoft Intune.

Screenshot of disaster recovery experience in the Azure portal

Azure Monitor January 2019 updates

Azure Monitor now integrates the capabilities of Log Analytics and Application Insights for powerful, end-to-end monitoring of your applications. Learn what was added throughout the month of January to Application Insights, Log Analytics, Azure Monitor Workbooks, and Azure Metrics. In addition, Workbooks are now available in Azure Monitor for VMs.

Lighting up healthcare data with FHIR: Announcing the Azure API for FHIR

The healthcare industry is rapidly adopting the emerging standard HL7 FHIR®, or Fast Healthcare Interoperability Resources. This robust, extensible data model standardizes semantics and data exchange so all systems using FHIR can work together. Azure API for FHIR® enables rapid exchange of data in the FHIR format and is backed by a managed Platform-as-a-Service (PaaS) offering in the cloud. Simplify data management with a single, consistent solution for protected health information.

Azure DevOps Projects supporting Azure Cosmos DB and Azure Functions

In the latest deployment of Azure DevOps Projects now available to all customers, we have added support for Azure Cosmos DB and Azure Functions as target destinations for your application. This builds on the existing Azure App Service, Azure SQL Database, and Azure Kubernetes Service (AKS) support.

Screenshot of choosing an application framework in DevOps Project

Find out when your virtual machine hardware is degraded with Scheduled Events

Scheduled Events are now triggered when Azure predicts that hardware issues will require a redeployment to healthy hardware in the near future, and they provide a time window within which Azure will redeploy the VMs to healthy hardware if a live migration is not possible. You can initiate the redeployment of your VMs ahead of Azure doing it automatically.
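
As a quick sketch, a VM can poll for upcoming events from inside the guest via the instance metadata service (the api-version here is the one I believe was current at the time, so verify it against the documentation):

# Query the Scheduled Events endpoint from inside the VM.
# 169.254.169.254 is the non-routable instance metadata address.
curl -s -H "Metadata: true" \
  "http://169.254.169.254/metadata/scheduledevents?api-version=2017-08-01"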

Additional updates

Technical content

Processing trillions of events per day with Apache Kafka on Azure

The Siphon team shares their experiences and learnings from running one of the world's largest Kafka deployments. Besides underlying infrastructure considerations, they discuss several tunable Kafka broker and client configurations that affect message throughput, latency, and durability. After running hundreds of experiments, they standardized the Kafka configurations required to achieve maximum utilization for various production use cases. They also explain how to tune a Kafka cluster to configure producers, brokers, and consumers for the best possible performance.

Best practices to consider before deploying a network virtual appliance

A network virtual appliance (NVA) is a virtual appliance primarily focused on network functions virtualization. A typical network virtual appliance involves various layer-4 to layer-7 functions such as firewall, WAN optimizer, application delivery controller, router, load balancer, IDS/IPS, proxy, SD-WAN edge, and more. Common best practices include: Azure Accelerated Networking support, multi-NIC support, using an Azure Load Balancer high availability (HA) ports load-balancing rule, and support for Virtual Machine Scale Sets (VMSS).

Build your own deep learning models on Azure Data Science Virtual Machines

The Practical Deep Learning for Coders 2019 course from fast.ai helps software developers start building their own state-of-the-art deep learning models. Developers who complete this course will become proficient in deep learning techniques in multiple domains including computer vision, natural language processing, recommender algorithms, and tabular data. Learn how you can run this course using the Azure Data Science Virtual Machines (DSVM).

Screenshot from fast.ai notebook

Performance best practices for using Azure Database for PostgreSQL – Connection Pooling

This blog is a continuation of a series of blog posts to share best practices for improving performance and scale when using Azure Database for PostgreSQL service. This post focuses on the benefits of using connection pooling and provides recommendations to improve connection resiliency, performance, and scalability of applications running on Azure Database for PostgreSQL.

Azure Event Grid: The Whole Story

As promised, Jeremy Likness, a Microsoft Cloud Advocate, takes a thorough look at the serverless backbone for all your event-driven computing needs: Azure Event Grid, a single service for managing routing of all events from any source to any destination.

Pentesting Azure — Thoughts on Security in Cloud Computing

Tanya Janca, a Microsoft Cloud Advocate, shares a list of her thoughts on penetration testing (pentesting) Azure applications as she sets out to read Pentesting Azure Applications by Matt Burrough. She promises a future post once she finishes reading the book.

Azure shows

Episode 265 - Azure DevOps Server | The Azure Podcast

Cynthia and Evan talk to Jamie Cool, Director of Program Management at Microsoft, who gives us all the details and potential use-cases for the Azure DevOps Server in your organization.

An overview of Azure Blueprints | Azure Friday

Alex Frankel joins Scott Hanselman to discuss Azure Blueprints. Environment creation can be a long and error prone process. Azure Blueprints helps you deploy and update cloud environments in a repeatable manner using composable artifacts such as policies, role-based access control, and Azure Resource Manager templates.

Enhanced monitoring capabilities and tags/annotations in Azure Data Factory | Azure Friday

Gaurav Malhotra and Scott Hanselman explore tagging support and enhanced monitoring capabilities, including dashboards and improved debugging support in Azure Data Factory. Data integration is complex and the ability to monitor your data factory pipelines is a key requirement for dev ops personnel inside an enterprise. Now, you can tag/annotate your data factory pipelines to monitor all the pipeline executions with that particular tag. In addition, Data Factory visual tools provide dashboards to monitor your pipelines and an ability to monitor your pipeline execution by the Integration Runtime (IR) upon which the activities execute.

Logic Apps Connector to Ethereum Blockchain Networks | Block Talk

This episode provides an overview of how to use our serverless Ethereum Connector to transform smart contracts into an automated, visual workflow using the rich Azure Logic Apps Connectors ecosystem. We introduce the core concepts of Logic Apps and demonstrate a sample workflow triggered by a Solidity event, including how to read smart contract properties and write them to Azure Blob storage.

Azure IoT Device Agent for Windows | Internet of Things Show

Customers across industries, whether in an industrial setting or retail environment, are looking for ways to remotely provision and manage their IoT devices. Direct device access may not always be feasible when IoT devices are out in the field or on the factory floor. Microsoft Azure IoT Device Agent enables operators to configure, monitor and manage their devices remotely from their Azure dashboard. In this episode of the #IoTShow you will get an overview of Microsoft Azure IoT Device Agent with a demo.

Visual Studio for Mac: Publish to Azure | Visual Studio Toolbox

In this video, Cody Beyer will demonstrate how to log in and publish a web project to Azure. Join him and learn how to get the most out of Visual Studio for Mac by combining it with the power of Azure.

How to manage your Kubernetes clusters | Kubernetes Best Practices Series

Learn best practices on how to manage your Kubernetes clusters from field experts in this episode of the Kubernetes Best Practices Series. In this intermediate-level deep dive, you will learn about cluster management and multi-tenancy in Kubernetes.

Thumbnail from How to manage your Kubernetes clusters | Kubernetes Best Practices Series on YouTube

How to add the Azure Cloud Shell to Visual Studio Code | Azure Tips and Tricks

In this edition of Azure Tips and Tricks, learn how to add the Azure Cloud Shell to Visual Studio Code. To add the Azure Cloud Shell, make sure you have the “Azure Account” extension installed in Visual Studio Code.

Thumbnail from How to add the Azure Cloud Shell to Visual Studio Code | Azure Tips and Tricks on YouTube

Overview of VS Code Extensions for Azure Government

In this episode of the Azure Government video series, Steve Michelotti sits down with Yujin Hong, Program Manager on the Azure Government Engineering team, to discuss many of the incredible VS Code extensions for Azure. VS Code has quickly become the most popular editor in the world, and there are numerous reasons for this, but one of the key reasons is VS Code's extensibility. There are numerous VS Code extensions available for Azure, and now these same extensions can be utilized for Azure Government! In this demo-heavy video, Yujin shows the unified authentication experience that enables all these cool extensions to seamlessly authenticate to Azure Government. She then walks through several demos that show how easy these extensions make it for developers to work with Storage, App Service, Cosmos DB, and Azure Functions in Azure Government. If you're a developer who works with Azure Government, this video is for you!

Thumbnail from Overview of VS Code Extensions for Azure Government on YouTube

Modern Data Warehouse overview | Azure SQL Data Warehouse

How do you think about building out your data pipeline in Azure? Discover how the Modern Data Warehouse solution pattern can modernize your data infrastructure in the cloud and enable new business scenarios. This is the first episode of an 8-part series on Azure SQL Data Warehouse.

Thumbnail from Modern Data Warehouse overview | Azure SQL Data Warehouse  on YouTube

Paul Stovell on Octopus Deploy - Episode 22 | The Azure DevOps Podcast

Paul Stovell, the founder and CEO of Octopus Deploy, joins the podcast today. Paul is an expert on all things automated deployment and Cloud operations. He started Octopus Deploy back in 2011, but prior to that, he worked as a consultant for about five years. Octopus Deploy is a pretty major player in the market. Their mission? To do automated deployments really, really well. Today, it helps over 20,000 customers automate their deployments, and employs 40 brilliant people. It can be integrated with Azure DevOps services and many other build services. On this week’s episode, Paul talks about his career journey and what led him to create Octopus Deploy; his accomplishments, goals, and visions for Octopus Deploy; which build servers integrate best with Octopus Deploy; his tips and tricks for how to best utilize it; and his vision for the future of DevOps.

Events

Cloud Commercial Communities webinar and podcast newsletter–February 2019

In this Cloud Commercial Communities monthly webinar and podcast update, get links to upcoming webinars for February as well as to webinars and podcasts from January. Each month the team focuses on core programs, updates, trends, and technologies that Microsoft partners and customers need to know to increase success using Azure and Dynamics. While much of the content is available for on-demand consumption, attending live webinars enables you to participate in Q&A with the webinar hosts.

Customers, partners, and industries

Modernizing payment management for online merchants

Learn about Guru, which is Newgen's fully-integrated portal that enables merchants to have a complete view of their payments, generate reports, capture/void transactions, and perform refunds. Guru is a fully cloud-based solution hosted completely on Microsoft Azure. It is a fully-managed SaaS solution that comes as a value addition with Newgen's Payment Gateway—a cutting edge payment technology for merchants.

Illustration of the benefits of Newgen's Payment Gateway

Azure IoT drives next-wave innovation in infrastructure and energy

Last week at the DistribuTECH conference in New Orleans, Azure IoT partners showcased new solutions that bring the next level of “smart” to our grids. We invited eight partners to the Microsoft booth to demonstrate their approach to modernizing infrastructure, and how Azure IoT dramatically accelerates time to results. Learn how each partner showed new use cases for utilities, infrastructure, and cities that take advantage of cloud, AI, and IoT. With solutions that take full advantage of the intelligent cloud and intelligent edge, we continue to demonstrate how cloud, IoT, and AI have the power to drastically transform every industry. Smart grids will drive efficiencies to power and utility companies, grid operators, and energy prosumers.

Advancing tactical edge scenarios with Dell EMC Tactical Microsoft Azure Stack and Azure Data Box family

Microsoft, working with partners like Dell EMC, shared new capabilities that continue to deliver the power of the intelligent cloud and intelligent edge to government customers and their partners. Last year, we announced Azure Stack availability for Azure Government customers. With Azure Stack for Azure Government, agencies can efficiently modernize their on-premises legacy applications that are not ready or a fit for the public cloud due to cyber defense concerns, regulations, or other requirements. Data Box products help agencies to migrate large amounts of data, for example backup, archive or big data analytics, to Azure when they are limited by time, network availability, or costs.

Photo of Dell EMC Tactical Microsoft Azure Stack in partnership with Tracewell Systems

Investing in our partners’ success

While Microsoft has long been a partner-oriented organization, some things are different with cloud. Specifically, partners need Microsoft to be more than just a great technology provider, you need us to be a trusted business partner. This requires long-term commitment and the ability to continually adapt and innovate as the market shifts. This has been, and continues to be, our commitment. Our partnership philosophy is grounded in the foundation that we can only deliver on our mission if there is a strong and successful ecosystem around us. Julia White, Corporate Vice President, Microsoft Azure,  highlights our key partner-oriented investments and some of the resources to help our partners successfully grow their businesses.

Azure Marketplace new offers – Volume 31

The Azure Marketplace is the premier destination for all your software needs – certified and optimized to run on Azure. Find, try, purchase, and provision applications & services from hundreds of leading software providers. You can also connect with Gold and Silver Microsoft Cloud Competency partners to help your adoption of Azure. In the first half of January we published 67 new offers.

Colfax amplifies the power of its ESAB product portfolio with IoT

With the evolution of the Internet of Things (IoT), Colfax saw an opportunity to transform its businesses. What was unique about Colfax’s IoT initiative – named Data Driven Advantage (DDA) –  was their vision of enabling customers to leverage the extensive ESAB portfolio. They selected PTC Thingworx for Azure and the Microsoft Azure IoT platform. With ESAB Digital Solutions, customers now have data to understand how processes, labor, and material contribute to the cost of each part.


This Week in Azure - 8 February 2019 | A Cloud Guru - This Week in Azure

This time on Azure This Week, Lars covers the general availability of Lsv2-series VMs, a new version of the Microsoft Threat Modelling Tool, the return of Azure trivia every Monday, and how you can meet the team from A Cloud Guru next week at the Ignite Tour in Sydney.

Thumbnail from This Week in Azure - 8 February 2019 | A Cloud Guru - This Week in Azure on YouTube

Azure IoT Hub Java SDK officially supports Android Things platform


Connectivity is often the first challenge in the Internet of Things (IoT) world, which is why we released the Azure IoT SDKs more than three years ago. Azure IoT SDKs enable developers to build IoT applications that interact with IoT Hub and the IoT Hub Device Provisioning Service. The SDKs cover the most popular languages in IoT development, including C, .NET, Java, Python, and Node.js, as well as popular platforms like Windows, Linux, OSX, and MBED. Since April 2018, we have added official support for iOS and Android to enable mobile IoT scenarios.

Today, we are happy to share that Azure IoT Hub Java SDK will officially support the Android Things platform. This announcement showcases our commitment to enable greater choice and flexibility in IoT deployments. Developers can leverage the benefits of the Android Things operating system on the device side, while using Azure IoT Hub as the central message hub that scales to millions of simultaneously connected devices.

All features in the Java SDK will be available on the Android Things platform, including Azure IoT Hub features we support and SDK-specific features such as retry policy for network reliability. In addition, the Android Things platform will be tested with every release. Our test suites include unit tests, integration tests, and end-to-end tests, all available on GitHub. We also publish the exact platform version we are testing on. 

Learn more about building applications for IoT Hub on Android Things in the Azure IoT Java SDK documentation and samples on GitHub.

Actuating mobility in the enterprise with new Azure Maps services and SDKs


The mobility space is at the forefront of the most complex challenges faced by cities and urban areas today. The movement of people and things is as much a driver of opportunity as it is an agent of chaos, aggravating existing challenges of traffic, pollution, and unbalanced livelihoods. Today, Azure Maps is continuing to expand the offerings of our platform, introducing a new set of capabilities in the form of SDKs and cloud-based services, to enable enterprises, partners, and cities to build the solutions that will help visualize, analyze, and optimize these mobility challenges.

The services we’re introducing are designed exclusively for the needs of the modern enterprise customer – powerful, real-time analytics and seamless cross-screen experiences, fortified by robust security services.

First, we’re officially moving the following services from public preview to general availability: Route Range {Isochrones}, Get Search Polygon, and Satellite and Hybrid Imagery. Furthermore, we’re introducing multiple new services. We’re enhancing our map canvas by introducing a stunning set of natural earth map tiles and an image compositor to make interaction with our maps more aesthetic, useful, and powerful. We’re also introducing Spatial Operations services that offer powerful analytics used by mobility applications and other industries today, as well as a new Android SDK and a Web SDK that equip Azure customers with the tools necessary to make smarter, faster, more informed decisions. And because privacy and security are top of mind, Azure Maps is now natively integrated with Azure Active Directory, making access to our services more secure while enabling roles and restrictions to our customers.

These services provide Azure customers with the ability to offload map data processing and hosting costs, all while getting the benefits of a rich set of maps and mapping services, securely, with the fastest map data refresh rate available today. This refresh rate is bolstered by the recently announced partnership with TomTom, which has committed to moving its map-making workloads to the Azure cloud, significantly shortening the time from impetus to end user.

Android SDK (Preview)

While the Azure Maps Web SDK can work within a web control on mobile platforms, many developers prefer native support to interoperate with other native components and have these capabilities in native code… Java != JavaScript. In support of our customers who rely on applications running on Android, Azure Maps is distributing an Android SDK, complete with rendering of maps and traffic, drawing, event handling, and the variety of our map canvases. You can also connect to other Azure Maps services such as Search and Routing through the Azure Maps services APIs.


Spatial Operations (Preview)

Data analysis is central to the Internet of Things (IoT). Azure Maps Spatial Operations take location information and analyze it on the fly to help inform our customers of ongoing events happening in time and space, enabling near real-time analysis and predictive modeling of events. Today, Spatial Operations includes the following services (a sketch of calling one of them follows the list):

Geofencing. A geofence is an "invisible fence" around a particular area. These "fences" exist as coordinates in the shape of customizable polygons and can be associated with temporal constraints so that fences are evaluated only when relevant. Furthermore, you can store a geofence in the Azure Maps Data Service (more on that below). With Azure Event Grid integration, you can create notifications whenever the position of an object changes with respect to a geofence - including entry, exit, or changing proximity to a geofence.

This has multiple applications. In transportation and logistics, it can be used to create alerts when cargo ships have arrived at waypoints along routes – critical for creating delivery alerts as well as for anticipating incidents of piracy. For drones, a geofence could enforce the limitation of where a drone is permitted to fly.

Geofencing in Azure Maps

However, geofencing isn’t limited to transportation. In agriculture, it can enable real-time notifications when a herd has left a field. In construction, you can receive a notification when expensive equipment leaves a construction site that it shouldn’t be leaving or when it is parked in an area that may cause damage to equipment. It can also be used to warn site visitors when entering hazardous zones on sites, as implemented by Scottish IoT technology firm, Tagdat. In services, vending machines in college dorms often disappear. The servicing company can be notified if the machine leaves the premises. Geofencing is an amazingly powerful service to provide notifications and analytics when objects are moving. And, even when objects aren’t moving and should be!

Another example of a customer that is using this service is GovQA who state, “We use geofencing to identify when a requester is trying to submit a request which is outside the customer's pre-defined limits. The limits are defined by the city or county during the set up of the system. This allows the city to correctly handle the request and communicate with the requester appropriately using rules and configurations within GovQA."

Buffer. Build an area around points, lines, and polygons based on a given distance. Define the area in proximity of powerlines that should be kept clear from vegetation or create route buffers in fleet management for managing route deviations.

Closest Point. Returns the closest points between a base point and a given collection of points. It can be used to quickly identify the closest stores, charging stations, or in mobility scenarios, it could be used to identify the closest devices.

Closest points in Azure Maps

Great-circle distance. Returns the great-circle, or shortest, distance between two points on the surface of a sphere. In the context of drone delivery services, this API can be used to calculate the as-the-crow-flies distance between an origin and a destination, taking into account the curvature of the earth, so that an accurate time estimate for the delivery can be used to optimize operations.

Point in Polygon. Returns a Boolean indicating whether the location is inside a given set of Polygon and MultiPolygon geometries. For example, the point in polygon API can be used to determine whether a home for sale is in the preferred area of customers.
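
As a rough sketch of how these services are called, a geofence evaluation for a device position might look like the following - the path, parameter names, and api-version are assumptions based on the preview REST surface, so check the current Azure Maps REST documentation:

# Hedged sketch: evaluate a device position against a previously stored geofence.
# {subscription-key} and {udid} are placeholders for your own values.
curl "https://atlas.microsoft.com/spatial/geofence/json?api-version=1.0&subscription-key={subscription-key}&deviceId=drone-42&udid={udid}&lat=47.6137&lon=-122.1969"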

Data Service (Preview)

Data is imperative for maps, and bringing customer data closer to the Azure Maps service reduces latency, increases productivity, and creates powerful new scenarios to light up in your applications. As such, Azure Maps now lets customers upload and store up to 50 MB of geospatial data for use with other Azure Maps services, such as geofencing or image composition.
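
A hedged sketch of what an upload might look like (the path, dataFormat parameter, and api-version are assumptions from the preview documentation):

# Upload a GeoJSON file (for example, a geofence polygon) to the Data Service.
curl -X POST "https://atlas.microsoft.com/mapData/upload?api-version=1.0&subscription-key={subscription-key}&dataFormat=geojson" \
  -H "Content-Type: application/json" \
  -d @fence.geojson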

Azure Active Directory Integration (Preview)

Security and role-based access control have been of paramount concern for the modern enterprise. As such, we’re proud to announce that Azure Active Directory (AD) is now a core capability of Azure Maps. Use Azure AD to protect your customers’ information and implement secure access by providing role-based access control (RBAC).  Whether you have public applications or applications requiring a login, Azure AD and Azure Maps will support your security needs by authenticating your applications and Azure AD user(s). Additionally, this Azure AD implementation supports managed identities for Azure resources which provide Azure services (Azure App service, Azure Functions, Virtual Machines, etc.) with an automatically managed identity that can be authorized for access to Azure Maps services. 

Azure Maps Web SDK 2.0

Today we’re announcing a new module for accessing Azure Maps services to use in conjunction with the Azure Map Control. The new Service Module allows you to natively work directly with the Azure Maps services. This new module, plus the aforementioned adoption of Azure Active Directory, warranted the need for creating a new version and encapsulating them into a single Web SDK. Henceforth, we’ll containerize our services for web developers into the Azure Maps Web SDK 2.0. Note that the Azure Map Control 1.x will continue to be operational. However, we will innovate on 2.0 moving forward. The upgrade path for 1.x to 2.0 is quite simple by changing the version number! As a part of the new Azure Maps Web SDK 2.0 we’re also including some new client features:

Azure Active Directory (AAD). Azure Maps now natively supports Azure Active Directory to keep your access to Azure Maps secure. With native AAD integration, ensure your access is protected when your applications call Azure Maps.

Services Module. The new Services Module adds support for AAD and a much cleaner API interface for accessing Azure Maps services. It works both with the Web SDK and in Node.js. Being part of the Azure family of products, the Azure Maps Services Module was designed to align with an initiative to unify Azure SDKs and was required in order to add support for AAD.

Stroke gradients. There are times in location services development when you’d want to have gradient colors throughout the stroke of a line. Azure Maps Web SDK 2.0 now natively supports the ability to fill a line with a gradient of colors to show transition from one segment of a line to the next. As an example, these gradient lines can represent changes over time and distance, or different temperatures across a connected line of objects.

Line Stroke Gradient in Azure Maps

Shaded Relief map style. Below you’ll read about the new Shaded Relief Map Style. This beautiful, new map style is available immediately in the Azure Maps Web SDK 2.0.

Polygon Fill Patterns. Polygons can be represented on a map in a plethora of ways. In many scenarios there is a need to create polygons on the map, and with the Azure Maps Web SDK 2.0 there is native control for shapes, borders, and fills. We now natively support patterns as a fill, in addition to a single color fill. Patterns provide a unique way to highlight a specific area to really make it stand out, especially if that area is surrounded by other color-shaded polygons. As an example, patterns can be used to show an area in transition, an area significantly different from other areas (financially, in population, or in land use), or areas highlighting facets of mobility such as no-fly zones or areas where permits are required.

Polygon Fill Patterns in Azure Maps

Shaded Relief map style

Azure Maps comes complete with a few map styles including the road, dark gray, night, and satellite/hybrid. We’re adding a new map style – Shaded Relief – to complement the existing map styles. Shaded Relief is just that – an elegantly designed map canvas complete with the contours of the Earth. The Azure Maps Web SDK comes complete with the Shaded Relief canvas and all functions work seamlessly atop of it.

Shaded Relief map style in Azure Maps

Image composition

Azure Maps is introducing a new image compositor that allows customers to render raster map images annotated with points, lines, and polygons. In many circumstances you can submit your request along with your respective point data to render a map image. For more complex implementations, you’ll want to use the map image compositor in conjunction with data stored in the aforementioned Azure Maps Data Service.

Image composition in Azure Maps
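
A hedged sketch of a simple render request (the path and the pins parameter format are assumptions from the preview render documentation):

# Render a static map image centered on a point, with a default pushpin.
curl -o map.png "https://atlas.microsoft.com/map/static/png?api-version=1.0&subscription-key={subscription-key}&center=-122.1969,47.6137&zoom=14&pins=default||-122.1969 47.6137"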

We always appreciate feedback from our customers. Feel free to comment below or post questions to Stack Overflow or our Azure Maps Feedback Forums.

Controlling costs in Azure Data Explorer using down-sampling and aggregation


Azure Data Explorer (ADX) is an outstanding service for continuous ingestion and storage of high-velocity telemetry data from cloud services and IoT devices. Leveraging its first-rate performance for querying billions of records, the telemetry data can be further analyzed for various insights such as monitoring service health, production processes, and usage trends. Depending on data velocity and retention policy, data size can rapidly scale to petabytes and increase the costs associated with data storage. A common solution for storing large datasets for a long period of time is to store the data at differing resolutions: the most recent data is stored at maximum resolution, meaning all events are stored in raw format, while historic data is stored at reduced resolution, filtered and/or aggregated. This solution is often used in time series databases to control hot storage costs.

In this blog, I'll use the GitHub events public dataset as the playground. For more information about how to stream GitHub events into your own ADX cluster, read the blog post "Exploring GitHub events with Azure Data Explorer." I'll describe how ADX users can take advantage of stored functions, the ".set-or-append" command, and the Microsoft Flow Azure Kusto connector to create and update tables with filtered, down-sampled, and aggregated data for controlling storage costs. The following are the steps I performed.

Create a function for down-sampling and aggregation

The ADX demo11 cluster contains a database named GitHub. Since 2016, all events from GHArchive have been ingested into the GitHubEvent table and now total more than 1 billion records. Each GitHub event is represented in a single record with event-related information on the repository, author, comments, and more.

Screenshot of Azure Data Explorer demo11 and GitHub database

Initially, I created the stored function AggregateReposWeeklyActivity which counts the total number of events in every repository for a given week.

.create-or-alter function with (folder = "TimeSeries", docstring = "Aggregate Weekly Repos Activity")
AggregateReposWeeklyActivity(StartTime:datetime)
{
     let PeriodStart = startofweek(StartTime);
     let Period = 7d;
     GithubEvent
     | where CreatedAt between(PeriodStart .. Period)
     | summarize EventCount=count() by RepoName = tostring(Repo.name), StartDate=startofweek(CreatedAt)
     | extend EndDate=endofweek(StartDate)
     | project StartDate, EndDate, RepoName, EventCount
}

I can now use this function to generate a down-sampled dataset of the weekly repository activity. For example, using the AggregateReposWeeklyActivity function for the first week of 2017 results in a dataset of 867,115 records.

Screenshot of AggregateReposWeeklyActivity function yielding dataset results
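For reference, that call is simply the following Kusto (the trailing count operator just reproduces the record count above):

// Down-sample one week of data, starting at the first week of 2017
AggregateReposWeeklyActivity(datetime(2017-01-01))
| count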

Using Kusto query, create a table with historic data

Since the original dataset starts in 2016, I wrote a program that creates a table named ReposWeeklyActivity and backfills it with weekly aggregated data from the GitHubEvent table. The program ingests the weekly aggregated datasets in parallel using the ".set-or-append" command. The first ingestion operation also creates the table that holds the aggregated data.


Code sample:
using Kusto.Data.Common;
using Kusto.Data.Net.Client;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace GitHubProcessing
{
     class Program
     {
         static void Main(string[] args)
         {
             var clusterUrl = "https://demo11.westus.kusto.windows.net:443;Initial Catalog=GitHub;Fed=True";
             using (var queryProvider = KustoClientFactory.CreateCslAdminProvider(clusterUrl))
             {
                 Parallel.For(
                     0,
                     137,
                     new ParallelOptions() { MaxDegreeOfParallelism = 8 },
                     (i) =>
                     {
                         var startDate = new DateTime(2016, 01, 03, 0, 0, 0, 0, DateTimeKind.Utc) + TimeSpan.FromDays(7 * i);
                         var startDateAsCsl = CslDateTimeLiteral.AsCslString(startDate);
                         var command = $@"
                         .set-or-append ReposWeeklyActivity <|
                         AggregateReposWeeklyActivity({startDateAsCsl})";
                         queryProvider.ExecuteControlCommand(command);

                        Console.WriteLine($"Finished: start={startDate.ToUniversalTime()}");
                     });
             }
         }
     }
}

Once the backfill is complete, the ReposWeeklyActivity table will contain 153 million records.

Screenshot of the ReposWeeklyActivity table yielding 153 million records

Configure weekly aggregation jobs using Microsoft Flow and Azure Kusto connector

Once the ReposWeeklyActivity table is created and filled with the historic data, we want to make sure it stays updated with new data appended every week. For that purpose, I created a flow in Microsoft Flow that leverages Azure Kusto connector to ingest aggregation data on a weekly basis. The flow is built of two simple steps:

  1. Weekly trigger of Microsoft Flow.
  2. Use of “.set-or-append” to ingest the aggregated data from the past week.

image

For additional information on using Microsoft Flow with Azure Data Explorer see the Azure Kusto Flow connector.

Start saving

To depict the cost-saving potential of down-sampling, I used the ".show table <table name> details" command to compare the size of the original GitHubEvent table and the down-sampled ReposWeeklyActivity table.

.show table GithubEvent details
| project TableName, SizeOnDiskGB=TotalExtentSize/pow(1024,3), TotalRowCount

.show table ReposWeeklyActivity details
| project TableName, SizeOnDiskGB=TotalExtentSize/pow(1024,3), TotalRowCount

The results, summarized in the table below, show that for the same time frame the down-sampled data is approximately 10 times smaller in record count and approximately 180 times smaller in storage size.

 

                                              Original data              Down-sampled/aggregated data
Time span                                     2016-01-01 … 2018-09-26    2016-01-01 … 2018-09-26
Record count                                  1,048,961,967              153,234,107
Total size on disk (indexed and compressed)   725.2 GB                   4.38 GB

Converting this cost-saving potential into real savings can be done in various ways; a combination of the different methods is usually most efficient in controlling costs.

  • Control cluster size and hot storage costs: Set different caching policies for the original data table and the down-sampled table - for example, 30 days of caching for the original data and two years for the down-sampled table (see the policy sketch at the end of this section). This configuration lets you enjoy ADX first-rate performance for interactive exploration of raw data and analyze activity trends over years, all while controlling cluster size and hot storage costs.
  • Control cold storage costs: Set different retention policies for the original data table and the down-sampled table - for example, 30 days of retention for the original data and two years for the down-sampled table. This lets you explore the raw data and analyze activity trends over years while controlling cold storage costs. On a different note, this configuration is also common for meeting privacy requirements, as the raw data might contain user-identifiable information while the aggregated data is usually anonymous.
  • Use the down-sampled table for analysis: Running queries on the down-sampled table for time series trend analysis consumes less CPU and memory. In the example below, I compare the resource consumption of a typical query that calculates the total weekly activity across all repositories. The query statistics show that analyzing weekly activity trends on the down-sampled dataset is approximately 17 times more efficient in CPU consumption and approximately 8 times more efficient in memory consumption.

Running this query on the original GitHubEvent table consumes approximately 56 seconds of total CPU time and 176MB of memory.

Screenshot of a command comparing GitHubEvent and ReposWeeklyActivity table sizes

The same calculation on the aggregated ReposWeeklyActivity table consumes only about three seconds of total CPU time and 16MB of memory.

Screenshot showing CPU time and MB of memory being used by demo11 query
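
For reference, a trend query of the kind measured above might look like the following sketch (the post doesn’t show the exact query, and the column names are assumptions):

GitHubEvent
| summarize WeeklyActivity = count() by Week = startofweek(CreatedAt)

ReposWeeklyActivity
| summarize WeeklyActivity = sum(ActivityCount) by StartDate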

Next steps

Azure Data Explorer leverages cloud elasticity to scale out to petabyte-size data, deliver exceptional performance, and handle high query workloads. In this blog, I’ve described how to implement down-sampling and aggregation to control the costs associated with large datasets.

To find out more about Azure Data Explorer you can:

Azure IoT Edge runtime available for Ubuntu virtual machines


Azure IoT Edge is a fully managed service that allows you to deploy Azure and third-party services—edge modules—to run directly on IoT devices, whether they are cloud-connected or offline. These edge modules are container-based and offer functionality ranging from connectivity to analytics to storage—allowing you to deploy modules entirely from the Azure portal without writing any code. You can browse existing edge modules in the Azure Marketplace.

Today, we’re excited to offer the open-source Azure IoT Edge runtime preinstalled on Ubuntu virtual machines to make it even easier to get started, simulate an edge device, and scale out your automated testing.

Why use virtual machines?

Azure IoT Edge deployments are built to scale, so you can deploy globally to any number of devices. Simulating that workload with virtual devices is an important step in verifying that your solution is ready for mass deployment. The easiest way to do this is by creating simulated devices with Azure virtual machines (VMs) running the Azure IoT Edge runtime, letting you scale your testing from the earliest stages of development, even before you have production hardware.

Azure VMs are:
•    Scalable and automatable: deploy as many as you need
•    Persistent: managed in the cloud rather than locally
•    Flexible: any operating system and elastic resources
•    Easy to use: deploy with simple command line instructions or a template

Azure IoT Edge on Ubuntu VM

On first boot, the Azure IoT Edge on Ubuntu VM preinstalls the latest version of the Azure IoT Edge runtime, so you will always have the newest features and fixes. It also includes a script that sets the connection string and then restarts the runtime, and the script can be triggered remotely through the Azure VM portal or the Azure command line. This allows you to configure and connect the IoT Edge device without starting a secure shell (SSH) or remote desktop session. The script waits until the IoT Edge client is fully installed before setting the connection string, so you don’t have to build that logic into your automation.

The initial offering is based on Ubuntu Server 16.04 LTS, but other operating systems and versions will be added based on user feedback. We’d love to hear your thoughts in the comments.

Getting started

You can deploy the Azure IoT Edge on Ubuntu VM through the Azure Marketplace, Azure Portal, or Azure Command-line. Let me show you how to use the Azure Marketplace and the Portal.

Azure Marketplace

The quickest way to set up a single instance is to use the Azure Marketplace:

1.  Navigate to the Marketplace with our short link or by searching “Azure IoT Edge on Ubuntu” on the Azure Marketplace.

2.  Select “GET IT NOW” and then “Continue” on the next dialog.

3.  Once in the Azure Portal, select “Create” and follow the wizard to deploy the VM.

  1. If it’s your first time trying out a VM, it’s easiest to use a password and enable the SSH in the public inbound port menu.
  2. If you have a resource intensive workload, you should upgrade the virtual machine size by adding more CPUs and/or memory.

4.  Once the virtual machine is deployed, configure it to connect to your IoT Hub:

  1. Copy your device connection string from the IoT Edge device created in your IoT Hub. (You can follow the “Register a new Azure IoT Edge device from the Azure portal” how-to guide if you aren’t familiar with this process.)
  2. Select your newly created virtual machine resource from the Azure Portal and open the “run command” option.


  3. Select the “RunShellScript” option.


  4. Execute the script below via the command window with your device connection string: /etc/iotedge/configedge.sh “{device_connection_string}”
  5. Select “Run”.
  6. Wait a few moments; the screen should then provide a success message indicating the connection string was set successfully.


5.  Voila! Your IoT Edge on Ubuntu VM is now connected to IoT Hub.
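
If you prefer scripting step 4 over clicking through the portal, the same Run Command can be invoked with the Azure CLI (a sketch; the resource group and VM names are placeholders):

az vm run-command invoke \
  --resource-group IoTEdgeResources \
  --name EdgeVM \
  --command-id RunShellScript \
  --scripts "/etc/iotedge/configedge.sh '{device_connection_string}'"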

Azure portal

If you’re already working in the Azure portal, you can search for “Azure IoT Edge” and select “Ubuntu Server 16.04 LTS + Azure IoT Edge runtime” to begin the VM creation workflow. From there, complete steps 3-5 in the Marketplace instructions above.


If you’d like to learn how you can deploy these virtual machines at scale, check out the “Deploy from Azure CLI” section in the Run Azure IoT Edge on Ubuntu Virtual Machines article.

Now that you have created an IoT Edge device with your virtual machine and connected it to your IoT Hub, you can deploy modules to it like any other IoT Edge device. For example, if you go to the IoT Edge Module Marketplace and select the “Simulated Temperature Sensor,” you can deploy this module to the new device and see data flowing in just a few clicks! Next, try deploying your own workloads to the virtual machine and let us know how we can further simplify your IoT Edge testing experience in the comments section below or on UserVoice.

Get started with Azure IoT Edge on Ubuntu virtual machines today!

Azure Stack IaaS – part one


This blog post was co-authored by Daniel Savage, Principal Program Manager, Azure Stack and Tiberiu Radu, Senior Program Manager, Azure Stack.

Azure Stack at its core is an Infrastructure-as-a-Service (IaaS) platform

When we discuss Azure Stack with our customers, they see the value in Azure Stack providing cloud-native capabilities to their datacenters. They see the opportunity to modernize their apps and address the unique solutions Azure Stack can deliver, but they often pause as they ponder where to begin. They wonder how to get value from the investments they have in apps currently running on virtual machines (VM). They wonder, “Does Azure Stack help me here? What if I am not quite ready for Platform-as-a-Service?” These questions are difficult, but the answers become more clear when they understand that Azure Stack at its core is an IaaS platform.

Azure Stack allows customers to run their own instance of Azure in their datacenter. Organizations pick Azure Stack as part of their cloud strategy because it helps them handle situations when the public cloud won’t work for them. The three most common reasons to use Azure Stack are poor network connectivity to the public cloud, regulatory or contractual requirements, or backend systems that cannot be exposed to the Internet.

Azure Stack has created a lot of excitement around new hybrid application patterns, consistent Azure APIs to simplify DevOps practices and processes, the extensive Azure ecosystem available through the Marketplace, and the option to run Azure PaaS services locally, such as App Services and IoT Hub. Underlying all of these are some exciting IaaS capabilities, and we are so excited to be kicking off a new blog series to show them off.

Welcome to the Azure Stack IaaS blog series!

To learn more, please see the below resources:

IaaS is more than virtual machines

People often think of IaaS as simply virtual machines, but IaaS is more. When you deploy a VM in Azure or Azure Stack, the machine comes with a software defined network including DNS, public IPs, firewall rules (also called network security groups), and many other capabilities. The VM deployment also creates disks for your VMs on software defined storage running in Blob Storage. In the Azure Stack portal image, you can see how this full software defined infrastructure is displayed after you have deployed a VM:

Full software defined infrastructure in the Azure Stack portal

To learn more, please see below for product overviews:

IaaS is the foundation for PaaS Services

Did you know that Azure PaaS services are powered by IaaS VMs behind the scenes? As a user you don’t see these VMs, but they deliver capabilities like Event Hubs or Azure Kubernetes Service (AKS). This same Azure IaaS is the foundation of PaaS in Azure Stack. Not only can you use it to deliver your applications; Azure PaaS services will also use IaaS VMs to deliver solutions on Azure Stack.

Take Event Hubs, currently in private preview, as an example. An Azure Stack administrator downloads the Event Hubs resource provider from the Marketplace and installs it. Installation creates a new admin subscription and a set of IaaS resources. The administrator sees things like virtual networks, DNS zones, and virtual machine scale sets in the administration portal:

Microsoft Azure Stack Administration portal

However, when one of your developers deploys their Event Hub in Azure Stack, they don’t see the behind-the-scenes IaaS VMs and resources in their subscription, they just see the Event Hub:

Deploys EventHub in Azure Stack

Modernize your apps through operations

Often people think that application modernization involves writing or changing application code, or that modernization means rearchitecting the entire application. In most cases, the journey starts with small steps. When you run your VMs in Azure or Azure Stack, you can modernize your operations.

In addition to the underlying infrastructure, Azure and Azure Stack offer a full set of integrated and intelligent services. These services support management of your VMs, provide self-service capabilities, enhance deployment, and enable infrastructure-as-code. With Azure Stack, you empower your teams.

Over the next couple of blog posts we will go into more detail about these areas. Here is a chart of the cloud capabilities you can utilize to modernize your IaaS VM operations:

Chart of cloud capabilities to modernize your IaaS VM operations

What’s next in this blog series

We hope you come back to read future posts in this blog series. Here are some of our planned upcoming topics:

  • Fundamentals of IaaS
  • Start with what you already have
  • Do it yourself
  • Pay for what you use
  • It takes a team
  • If you do it often, automate it
  • Protect your stuff
  • Build on the success of others
  • Journey to PaaS

PyTorch on Azure: Deep learning in the oil and gas industry


This blog post was co-authored by Jürgen Weichenberger, Chief Data Scientist, Accenture and Mathew Salvaris, Senior Data Scientist, Microsoft

Drilling for oil and gas is one of the most dangerous jobs on Earth. Workers are exposed to the risk of events ranging from small equipment malfunctions to entire offshore rigs catching on fire. Fortunately, the application of deep learning in predictive asset maintenance can help prevent natural and human-made catastrophes.


We have more information than ever on our equipment thanks to sensors and IoT devices, but we are still working on ways to process the data so it is valuable for preventing these catastrophic events. That’s where deep learning comes in. Data from multiple sources can be used to train a predictive model that helps oil and gas companies predict imminent disasters, enabling them to follow a proactive approach.

Using the PyTorch deep learning framework on Microsoft Azure, Accenture helped a major oil and gas company implement such a predictive asset maintenance solution. This solution will go a long way in protecting their staff and the environment.

What is predictive asset maintenance?

Predictive asset maintenance is a core element of the digital transformation of chemical plants. It is enabled by an abundance of cost-effective sensors, increased data processing, automation capabilities, and advances in predictive analytics. It involves converting information from both real-time and historical data into simple, accessible, and actionable insights, enabling the early detection and elimination of defects that would otherwise lead to malfunction. For example, simply detecting an early defect in a seal that connects the pipes can prevent a potential failure that would result in a catastrophic collapse of the whole gas turbine.

Under the hood, predictive asset maintenance combines condition-based monitoring technologies, statistical process control, and equipment performance analysis to enable data from disparate sources across the plant to be visualized clearly and intuitively. This allows operations and equipment to be better monitored, processes to be optimized and controlled, and energy management to be improved.

It is worth noting that the predictive analytics at the heart of this process do not tell the plant operators what will happen in the future with complete certainty. Instead, they forecast what is likely to happen in the future with an acceptable level of reliability. It can also provide “what-if” scenarios and an assessment of risks and opportunities.

Asset maintenance maturity matrix

Figure 1 – Asset maintenance maturity matrix (Source: Accenture)

The challenge with oil and gas

Event prediction is one of the key elements in predictive asset maintenance. For most prediction problems there are enough examples of each pattern to create a model to identify them. Unfortunately, in certain industries like oil and gas where everything is geared towards avoiding failure, the sought-after examples of failure patterns are rare. This means that most standard modelling approaches either perform no better than experienced humans or fail to work at all.

Accenture’s solution with PyTorch and Azure

Although only a small number of failure examples exist, there is a wealth of time series and inspection data that can be leveraged.

Approach for Predictive Maintenance

Figure 2 – Approach for Predictive Maintenance (Source: Accenture)

After preparing the data in stage one, a two-phase deep learning solution was built with PyTorch in stage two. In the first phase, a recurrent neural network (RNN) with a long short-term memory (LSTM) architecture was trained; the neural network architecture used in the solution was inspired by Koprinkova-Hristova et al. 2011 and Aydin and Guldamlasioglu 2017. This RNN time series model forecasts important variables, such as the temperature of an important seal. These forecasts are then fed into a classifier algorithm (random forest) that identifies whether the variable is outside of the safe range; if so, the algorithm produces a ranking of potential causes which experts can examine and address. This effectively enables experts to address the root causes of potential disasters before they occur.
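
To make the first phase concrete, here is a rough PyTorch sketch of an LSTM forecaster of the kind described (an illustration only, not Accenture’s actual model; the feature count, layer sizes, and names are assumptions):

import torch
import torch.nn as nn

# Hypothetical forecaster: maps a window of sensor readings to the next value
# of a monitored variable (e.g., the temperature of a seal).
class SealTemperatureForecaster(nn.Module):
    def __init__(self, n_features=8, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.lstm(x)             # out: (batch, time, hidden_size)
        return self.head(out[:, -1, :])   # forecast from the last time step

model = SealTemperatureForecaster()
window = torch.randn(32, 120, 8)          # a batch of 32 windows of 120 readings
forecast = model(window)                  # shape: (32, 1)

In the solution described above, forecasts like these would then be handed to the random forest classifier for range checking and cause ranking.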

The following is a diagram of the system that was used for training and execution of the solution:  

System Architecture

Figure 3 - System Architecture

The architecture above was chosen to meet the customer requirement of maximum flexibility in modeling, training, and execution of complex machine learning workflows on Microsoft Azure. At the time of implementation, the services that fit these requirements were HDInsight and Data Science Virtual Machines (DSVM). If the project were implemented today, Azure Machine Learning service would be used for training and inferencing, with HDInsight or Azure Databricks for data processing.

PyTorch was used due to the extreme flexibility in designing the computational execution graphs, and not being bound into a static computation execution graph like in other deep learning frameworks. Another important benefit of PyTorch is that standard python control flow can be used and models can be different for every sample. For example, tree-shaped RNNs can be created without much effort. PyTorch also enables the use of Python debugging tools, so programs can be stopped at any point for inspection of variables, gradients, and more. This flexibility was very beneficial during training and tuning cycles.

The optimized PyTorch solution resulted in training times over 20 percent faster than other deep learning frameworks, along with 12 percent faster inferencing. These improvements were crucial in the time-critical environment the team was working in. Please note that the version tested was PyTorch 0.3.

Overview of benefits of using PyTorch in this project:

  • Training time
    • Reduction in average training time by 22 percent using PyTorch on the outlined Azure architecture.
  • Debugging/bug fixing
    • The dynamic computational execution graph in combination with Python standard features reduced the overall development time by 10 percent.
  • Visualization
    • The direct integration into Power BI enabled a high end-user acceptance from day one.
  • Experience using distributed training
    • The dynamic computational execution graph in combination with flow control allowed us to create a simple distributed training model and gain significant improvements in overall training time.

How did Accenture operationalize the final model?

Scalability and operationalization were key design considerations from day one of the project, as the customer wanted to scale out the prototype to several other assets across the fleet. As a result, all components within the system architecture were chosen with those as criteria. In addition, the customer wanted to have the ability to add more data sources using Azure Data Factory. Azure Machine Learning service and its model management capability were used to operationalize the final model. The following diagram illustrates the deployment workflow used.

Deployment workflow

Figure 4 – Deployment workflow

The deployment model was also integrated into a Continuous Integration/Continuous Delivery (CI/CD) workflow as depicted below.

CI CD workflow

Figure 5 – CI/CD workflow

PyTorch on Azure: Better together

The combination of Azure AI offerings with the capabilities of PyTorch proved to be a very efficient way to train and rapidly iterate on the deep learning architectures used for the project. These choices yielded a significant reduction in training time and increased productivity for data scientists.

Azure is committed to bringing enterprise-grade AI advances to developers using any language, any framework, and any development tool. Customers can easily integrate Azure AI offerings into any part of their machine learning lifecycles to productionize their projects at scale, without getting locked into any one tool or platform.

How to avoid overstocks and understocks with better demand forecasting


Promotional planning and demand forecasting are incredibly complex processes. Take something seemingly straightforward, like planning the weekly flyer: there are thousands of questions involving a multitude of teams just to decide what products to promote and where to position the inventory to maximize sell-through. For example:

  • What products do I promote?
  • How do I feature these items in a store? (Planogram: end cap, shelf talkers, signage etc.)
  • What pricing mechanic do I use? (% off, BOGO, multi-buy, $ off, loyalty offer, basket offer)
  • How do the products I'm promoting contribute to my overall sales plan?
  • How do the products I'm promoting interact with each other? (halo and cannibalization)
  • I have 5,000 stores, how much inventory of each promoted item should I stock at each store?

If the planning is not successful, the repercussions can hurt a business:

  • Stockouts directly result in lost revenue opportunities, through lost product sales. This could be a result of customers who simply purchase the desired item from another retailer—or a different brand of the item.
  • Overstock results in costly markdowns and shrinkage (spoilage) that impacts margin. The opportunity cost of holding non-productive inventory in-store also hurts the merchant. And if inventory freshness is a top priority, poor store allocation can impact brand or customer experience.
  • Since retailers invest margin to promote products, inefficient promotion planning can be a costly exercise. It’s vital to promote items that drive the intended lift.

Solution

Rubikloud’s Price & Promotion Manager allows merchants and supply chain professionals to take a holistic approach to integrated forecasting and replenishment. The product has three core modules detailed below.


The three modules are:

  1. Learn module: Leverages machine learning to understand how internal and external factors impact demand at a store-sku level, as well as a recommendation framework to improve future planning activities.
  2. Activate module: Allows non-technical users to harness the power of machine learning to better forecast demand and seamlessly integrate forecasts into the supply chain process.
  3. Optimize module: Simulates expected outcomes by changing various demand-driving levers such as promo mechanics, store placement, flyer, halo and cannibalization. The module can quickly reload past campaigns to automate forecast and allocation processes.

In addition, AI automates decision-making across the forecasting lifecycle. The retail-centric approach to forecasting applies novel solutions to more accurately forecast demand. For example, to handle new SKUs, the solution uses a new mapping approach that addresses data scarcity and improves forecast accuracy.

The Price and Promotion Manager solution is built on a cloud-native, SaaS data platform designed to handle enterprise data workloads, covering all aspects of the data journey, from ingestion and validation to transformation into a proprietary data model. Users can seamlessly integrate solution outputs into their supply chain processes. The product design recognizes the challenges faced by category managers and enables a more efficient planning process (for example, a quick view of year-over-year comparable promotions).

Benefits

  • Addresses data sparsity introduced by new product development and infrequently purchased items to better predict demand through new SKU mapping.
  • Translates stacked promotions and various promotion mechanics into an effective price to better model the impact on demand.
  • Uses hierarchical models to improve forecast accuracy.

Azure Services

Rubikloud’s solution uses the following Azure services:

  • HDInsight: allows Rubikloud to work faster and to have full confidence that they are taking advantage of every possible optimization.
  • Cosmos DB: provides the convenience of an always-on, reliable, and accessible key/value store. Also provides a reliable database service.
  • Blob Storage: easy to use and integrates well with HDInsight.
  • Azure Kubernetes Service (AKS): uses the power of Kubernetes orchestration for all Azure VM customers.

Recommended next steps

Explore how Price & Promotion Manager enables AI powered price and promotion optimization for enterprise retail.

Benefits of using Azure API Management with microservices


The IT industry is experiencing a shift from monolithic applications to microservices-based architectures. The benefits of this new approach include:

  • Independent development and freedom to choose technology – Developers can work on different microservices at the same time and choose the best technologies for the problem they are solving.
  • Independent deployment and release cycle – Microservices can be updated individually on their own schedule.
  • Granular scaling – Individual microservices can scale independently, reducing the overall cost and increasing reliability.
  • Simplicity – Smaller services are easier to understand which expedites development, testing, debugging, and launching a product.
  • Fault isolation – Failure of a microservice does not have to translate into failure of other services.

In this blog post we will explore:

    1. How to design a simplified online store system to realize the above benefits.
    2. Why and how to manage public facing APIs in microservice-based architectures.
    3. How to get started with Azure API Management and microservices.

      Example: Online store implemented with microservices

      Let’s consider a simplified online store system. A visitor to the website needs to be able to see a product’s details, place an order, and review a placed order.

      Whenever an order is placed, the system needs to process the order details and issue a shipping request. Based on user scenarios and business requirements, the system must have the following properties:

      • Granular scaling – Viewing product details happens on average at least 1,000 times more often than placing an order.
      • Simplicity – Independent user actions are clearly defined, and this separation needs to be reflected in the architecture of the system.
      • Fault isolation – Failure of the shipping functionality cannot affect viewing products or placing an order.

      These requirements point towards implementing the system with three microservices:

      • Order with public GET and POST API – Responsible for viewing and placing an order.
      • Product with public GET API – Responsible for viewing details of a product.
      • Shipping triggered internally by an event – Responsible for processing and shipping an order.

      For this purpose we will use Azure Functions, which are easy to implement and manage. Their event-driven nature means that they are executed, and billed, per interaction. This becomes useful when the store traffic is unpredictable. The underlying infrastructure scales down to zero in times of no traffic. It can also serve bursts of traffic in a scenario when a marketing campaign goes viral or load increases during shopping holidays like Black Friday in the United States.

      To maintain the scaling granularity, ensure simplicity, and keep release cycles independent, every microservice should be implemented in an individual Function App.

      Flowchart of microservice being implemented in an individual function app

      The order and product microservices are external facing functions with an HTTP Trigger. The shipping microservice is triggered indirectly by the order microservice, which creates a message in Azure Service Bus. For example, when you order an item, the website issues a POST Order API call which executes the order function. Next, your order is queued as a message in an Azure Service Bus instance which then triggers the shipping function for its processing.
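
      A minimal sketch of this wiring with C# Azure Functions follows (an illustration only, not the post’s code; it assumes the Service Bus extension for Azure Functions, and the queue and function names are placeholders — in practice each function would live in its own Function App, as recommended above):

      using System.IO;
      using System.Threading.Tasks;
      using Microsoft.AspNetCore.Http;
      using Microsoft.AspNetCore.Mvc;
      using Microsoft.Azure.WebJobs;
      using Microsoft.Azure.WebJobs.Extensions.Http;
      using Microsoft.Extensions.Logging;

      public static class StoreFunctions
      {
          // Order microservice: HTTP POST places an order and queues it for shipping.
          [FunctionName("PlaceOrder")]
          public static async Task<IActionResult> PlaceOrder(
              [HttpTrigger(AuthorizationLevel.Function, "post", Route = "order")] HttpRequest req,
              [ServiceBus("orders", Connection = "ServiceBusConnection")] IAsyncCollector<string> ordersQueue)
          {
              string order = await new StreamReader(req.Body).ReadToEndAsync();
              await ordersQueue.AddAsync(order);   // the queued message triggers the shipping function
              return new AcceptedResult();
          }

          // Shipping microservice: triggered by the queued order message.
          [FunctionName("Shipping")]
          public static void Ship(
              [ServiceBusTrigger("orders", Connection = "ServiceBusConnection")] string order,
              ILogger log)
          {
              log.LogInformation($"Processing shipping for order: {order}");
          }
      }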

      Top reasons to manage external API communication in microservices-based architectures

      The proposed architecture has a fundamental problem: the way communication from the outside is handled.

      • Client applications are coupled to internal microservices. This becomes especially burdensome when you wish to split, merge, or rewrite microservices.
      • APIs are not surfaced under the same domain or IP address.
      • Common API rules cannot be easily applied across microservices.
      • Managing API changes and introducing new versions is difficult.

      Although Azure Functions Proxies offer a unified API plane, they fall short in the other scenarios. These limitations can be addressed by fronting Azure Functions with Azure API Management, now available in a serverless Consumption tier.

      Flowchart showing Azure API Management fronting Azure Functions

      API Management abstracts APIs from their implementation and hosts them under the same domain or a static IP address. It allows you to decouple client applications from internal microservices. All your APIs in Azure API Management share a hostname and a static IP address. You may also assign custom domains.

      Using API Management secures your APIs by aggregating them in Azure API Management rather than exposing your microservices directly. This helps you reduce the surface area for a potential attack. You can authenticate API requests using a subscription key, JWT token, client certificate, or custom headers. Traffic may be filtered down to trusted IP addresses only.

      With API Management you can also execute rules on APIs. You can define API policies on incoming requests and outgoing responses globally, per API, or per API operation. There are almost 50 policies, covering authentication methods, throttling, caching, and transformations. Learn more by visiting our documentation, “API Management policies.”
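
      For example, a throttling policy applied to incoming requests looks like this (a minimal sketch using the documented rate-limit policy):

      <inbound>
          <base />
          <rate-limit calls="100" renewal-period="60" />
      </inbound>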

      API Management simplifies changing APIs. You can manage your APIs throughout their full lifecycle from design phase, to introducing new versions or revisions. Contrary to revisions, versions are expected to contain breaking changes such as removal of API operations or changes to authentication.

      You can also monitor APIs when using API Management. You can see usage metrics in your Azure API Management instance. You may log API calls in Azure Application Insights to create charts, monitor live traffic, and simplify debugging.

      API Management makes it easy to publish APIs to external developers. Azure API Management comes with a developer portal which is an automatically generated, fully customizable website where visitors can discover APIs, learn how to use them, try them out interactively, download their OpenAPI specification, and finally sign up to acquire API keys.

      How to use API Management with microservices

      Azure API Management has recently become available in a new pricing tier. With its billing per execution, the Consumption tier is especially suited for microservice-based architectures and event-driven systems. For example, it would be a great choice for our hypothetical online store.

      For more advanced systems, other tiers of API Management offer a richer feature set.

      Regardless of the selected service tier, you can easily front your Azure Functions with an Azure API Management instance. It takes only a few minutes to get started with Azure API Management.

      Maximize throughput with repartitioning in Azure Stream Analytics


      Customers love Azure Stream Analytics for its ease of analyzing streams of data in movement, with the ability to set up a running pipeline within five minutes. Optimizing throughput has always been a challenge when trying to achieve high performance in a scenario that can't be fully parallelized. This occurs when you don't control the partition key of the input stream, or your source “sprays” input across multiple partitions that later need to be merged. You can now use a new extension of Azure Stream Analytics SQL to specify the number of partitions of a stream when reshuffling the data. This new capability unlocks performance and aids in maximizing throughput in such scenarios.

      The new extension of Azure Stream Analytics SQL includes a keyword INTO that allows you to specify the number of partitions for a stream when performing reshuffling using a PARTITION BY statement. This new keyword, and the functionality it provides, is a key feature to achieve high performance throughput for the above scenarios, as well as to better control the data streams after a shuffle. To learn more about what’s new in Azure Stream Analytics, please see, “Eight new features in Azure Stream Analytics.”

      What is repartitioning?

      Repartitioning, or reshuffling, is required when processing data on a stream that is not sharded according to the natural input scheme, such as the PartitionId in the Event Hubs case. This might happen when you don’t control the routing of the event generators or you need to scale out your flow due to resource constraints. After repartitioning, each shard can be processed independently of others, and progress without additional synchronization between the shards. This allows you to linearly scale out your streaming pipeline.

      You can specify the number of partitions the stream should be split into by using a newly introduced keyword INTO after a PARTITION BY statement, with a strictly positive integer that indicates the partition count. Please see below for an example:

      SELECT * INTO [output] FROM [input] PARTITION BY DeviceID INTO 10

      The query above will read from the input, regardless of it being naturally partitioned, repartition the stream tenfold according to the DeviceID dimension, and flush the data to output. Hashing of the dimension value (DeviceID) is used to determine which partition shall accept which substream. The data will be flushed independently for each partitioned stream, assuming the output supports partitioned writes and either has 10 partitions or can handle an arbitrary partition count.

      A diagram of the data flow with the repartition in place is below:

      Diagram of the data flow with the repartition in place

      Why and how to use repartitioning?

      Use repartitioning to optimize the heavy parts of processing. It will process the data independently and simultaneously on disjoint subsets, even when the data is not naturally partitioned properly on input. The partitioning scheme is carried forward as long as the partition key stays the same.

      Experiment and observe the resource utilization of your job to determine the exact number of partitions needed. Remember, Streaming Unit (SU) count, which is the unit of scale for Azure Stream Analytics, must be adjusted so the number of physical resources available to the job can fit the partitioned flow. In general, six SUs is a good number to assign to each partition. If insufficient resources are assigned to the job, the system will only apply the repartition if it benefits the job.

      When joining two streams of data explicitly repartitioned, these streams must have the same partition key and partition count. The outcome is a stream that has the same partition scheme. Please see below for an example:

      WITH step1 AS (SELECT * FROM [input1] PARTITION BY DeviceID INTO 10),
           step2 AS (SELECT * FROM [input2] PARTITION BY DeviceID INTO 10)
      
      SELECT * INTO [output] FROM step1 PARTITION BY DeviceID UNION step2 PARTITION BY DeviceID

      Specifying a mismatching number of partitions or partition key would yield a compilation error when creating the job.

      When writing a partitioned stream to an output, it works best if the output scheme matches the stream scheme by key and count, so each substream can be flushed independently of others. Alternatively, the stream must be merged and possibly repartitioned again by a different scheme before flushing. This would add to the general latency of the processing, as well as the resource utilization and should be avoided.

      For use cases with SQL output, use explicit repartitioning to match optimal partition count to maximize throughput. Since SQL works best with eight writers, repartitioning the flow to eight before flushing, or somewhere further upstream, may prove beneficial for the job’s performance. For more information, please refer to the documentation, “Azure Stream Analytics output to Azure SQL Database.”
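
      A sketch of such a query, repartitioning the flow to eight partitions before flushing to a SQL output (the input, output, and key names are placeholders):

      SELECT * INTO [sql-output] FROM [input] PARTITION BY DeviceID INTO 8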

      Next steps

      Get started with Azure Stream Analytics and have a look at our documentation to understand how to leverage query parallelization in Azure Stream Analytics.

      For any question, join the conversation on Stack Overflow.

      Break When Value Changes: Data Breakpoints for .NET Core in Visual Studio 2019


      “Why is this value changing unexpectedly, and where or when is this occurring?!”

      This is a question many of us dread asking ourselves, knowing that we’ll have to do some tedious trial-and-error debugging  to locate the source of this issue.  For C++ developers, the exclusive solution to this problem has been the data breakpoint, a debugging tool allowing you to break when a specific object’s property changes.  Fortunately, data breakpoints are no longer a C++ exclusive because they are now available for .NET Core (3.0 or higher) in Visual Studio 2019 Preview 2!

      Data breakpoints for managed code were a long-requested ask for many of you. They are a great alternative to simply placing a breakpoint on a property’s setter because a data breakpoint focuses on a specific object’s property even when it’s out of scope, whereas the former option may result in constant, irrelevant breaks if you have hundreds of objects calling that function.

      How do I set a data breakpoint?

      Setting a data breakpoint is as easy as right-clicking on the property you’re interested in watching inside the watch, autos, or locals window and selecting “Break when value changes” in the context menu.  All data breakpoints are displayed in the Breakpoints window. They are also represented by the standard, red breakpoint circle next to the specified property.

      Setting a data breakpoint in the Locals window and viewing the breakpoint in the Breakpoints window

      When can I use data breakpoints?

      Now that you know how to set a data breakpoint, what next?  Here are some ways to take advantage of data breakpoints when debugging your .NET Core applications.

      Let’s say that you want to figure out who is modifying a property in an object when, most of the time, this property change does not happen in the same file. By setting a data breakpoint on the property of interest and continuing, the data breakpoint will stop at the line after the property has been modified.

      Break when _data value changes

      This also works for objects. The data breakpoint will stop when the property referencing the object changes value, not when the contents of the object change.

      Break when the property referencing an object changes

      As illustrated in the GIF above, calling the toEdit._artist.ChangeName() function did not cause a breakpoint to hit since it was modifying a property (Name) inside the Song’s Artist property.  In contrast, the data breakpoint is hit when the _artist property is assigned a reference to a new object.

      Data breakpoints are also useful when you want to know when something is added or removed from a collection. Setting a data breakpoint on the ‘Count’ field of classes from System.Collections.Generic makes it easy to detect when the collection has changed.

      Break when an object is added or removed from a list
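
      In code terms, that scenario looks like this (a hypothetical example):

      var songs = new List<string> { "Intro" };
      // In the Locals window, expand 'songs', right-click 'Count',
      // and choose "Break when value changes".
      songs.Add("Chorus");    // breaks here: Count changed from 1 to 2
      songs.RemoveAt(0);      // and here: Count changed from 2 to 1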

      Are there opportunities for improving managed data breakpoints?

      Since Visual Studio 2019 is still in preview, we highly encourage you to experiment, play around with, and provide feedback for this iteration of data breakpoints.  Here are some known scenarios where data breakpoints currently cannot be set, which we are working to support in future Visual Studio updates:

      • Properties that are not expandable in the tooltip, Locals, Autos, or Watch window
      • Static variables
      • Classes with the DebuggerTypeProxy Attribute
      • Fields inside of structs

      Managed data breakpoints also exclude properties that call native code and properties that depend on too many fields.

      Ready to try data breakpoints in your .NET Core applications?  Let us know in the comments!

      For any issues or suggestions about this feature, please let us know via Help > Send Feedback > Report a Problem in the IDE or in the Developer Community.

      Leslie Richardson, Program Manager, Visual Studio Debugging & Diagnostics
      @lyrichardson01

      Leslie is a Program Manager on the Visual Studio Debugging and Diagnostics team, focusing primarily on improving the overall debugging experience and feature set.

      Announcing launch of Azure Pipelines app for Slack

      I am excited to announce the availability of the Azure Pipelines app for Slack. If you use Slack, you can use the Azure Pipelines app for Slack to easily monitor the events for your pipelines. Set up and manage subscriptions for completed builds, releases, pending approvals and more from the app and get notifications for these events in your Slack channels.

      For details, please take a look at the documentation here.
      To install and add the app to your workspace, click here.

      We plan to continue improving the app on a regular basis. The ability to take actions from the notifications (like approving a deployment) is next on our list. Please give the app a try and send us your feedback using the /azpipelines feedback command in the app or on the Azure DevOps feedback portal.

      February Security Release: Team Foundation Server 2018 Update 3.2 Patch 1 is available


      We announced the Azure DevOps Bounty Program a few weeks ago. We’re excited that this effort has already helped us on our mission to provide the highest level of security for our customers. Thanks to everyone who is participating in the Bounty program.

      We plan to release security updates on the second Tuesday of each month (Patch Tuesday). This will give our customers a predictable and regular cadence that lines up with other security releases from Microsoft. When the updates involve binary changes, our releases will only replace the impacted binaries. If the updates involve database changes, we will release full installations.

      TFS 2018 Update 3.2 Patch 1
      Today, we released Team Foundation Server 2018 Update 3.2 Patch 1 that fixes two cross site scripting vulnerabilities found through the Bounty program:
      CVE-2019-0742: Cross site scripting (XSS) vulnerability in work items
      CVE-2019-0743: Cross site scripting (XSS) vulnerability in pull requests

      TFS 2018 Update 2 and Update 3 are impacted by these vulnerabilities. Azure DevOps Server 2019 RC2 is also impacted and will be fixed in the final release of Azure DevOps Server 2019. We recommend that all customers on TFS 2018 Update 2 or Update 3 upgrade to TFS 2018 Update 3.2 and apply TFS 2018 Update 3.2 Patch 1.

      Verifying Installation
      To verify if you have this update installed, you can check the version of the following file:
      [TFS_INSTALL_DIR]\Application Tier\Web Services\bin\Microsoft.TeamFoundation.WorkItemTracking.Web.dll

      TFS 2018 is installed to C:\Program Files\Microsoft Team Foundation Server 2018 by default.

      After installing TFS 2018 Update 3.2 Patch 1, the version will be 16.131.28605.6.
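
      One way to check the file version is from PowerShell (a sketch; adjust the path if you installed to a non-default location):

      (Get-Item "C:\Program Files\Microsoft Team Foundation Server 2018\Application Tier\Web Services\bin\Microsoft.TeamFoundation.WorkItemTracking.Web.dll").VersionInfo.FileVersion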


      Windows 10 SDK Preview Build 18334 available now!


      Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 18334 or greater). The Preview SDK Build 18334 contains bug fixes and under development changes to the API surface area.

      The Preview SDK can be downloaded from the developer section on Windows Insider.

      For feedback and updates to the known issues, please see the developer forum.  For new developer feature requests, head over to our Windows Platform UserVoice.

      Things to note:

      • This build works in conjunction with previously released SDKs and Visual Studio 2017.  You can install this SDK and still continue to submit your apps that target Windows 10 build 1809 or earlier to the Store.
      • The Windows SDK will now formally only be supported by Visual Studio 2017 and greater. You can download Visual Studio 2017 here.
      • This build of the Windows SDK will install ONLY on Windows 10 Insider Preview builds.
      • In order to assist with script access to the SDK, the ISO will also be accessible through the following static URL once it is published.
      • URL:  https://go.microsoft.com/fwlink/?prd=11966&pver=1.0&plcid=0x409&clcid=0x409&ar=Flight&sar=Sdsurl&o1=18334

      Tools Updates

      Message Compiler (mc.exe)

      • The “-mof” switch (to generate XP-compatible ETW helpers) is considered to be deprecated and will be removed in a future version of mc.exe. Removing this switch will cause the generated ETW helpers to expect Vista or later.
      • The “-A” switch (to generate .BIN files using ANSI encoding instead of Unicode) is considered to be deprecated and will be removed in a future version of mc.exe. Removing this switch will cause the generated .BIN files to use Unicode string encoding.
      • The behavior of the “-A” switch has changed. Prior to Windows 1607 Anniversary Update SDK, when using the -A switch, BIN files were encoded using the build system’s ANSI code page. In the Windows 1607 Anniversary Update SDK, mc.exe’s behavior was inadvertently changed to encode BIN files using the build system’s OEM code page. In the 19H1 SDK, mc.exe’s previous behavior has been restored and it now encodes BIN files using the build system’s ANSI code page. Note that the -A switch is deprecated, as ANSI-encoded BIN files do not provide a consistent user experience in multi-lingual systems.

      Breaking Changes

      Change to effect graph of the AcrylicBrush

      In this Preview SDK we’ll be adding a blend mode to the effect graph of the AcrylicBrush called Luminosity. This blend mode will ensure that shadows do not appear behind acrylic surfaces without a cutout. We will also be exposing a LuminosityBlendOpacity API available for tweaking that allows for more AcrylicBrush customization.

      By default, for those that have not specified any LuminosityBlendOpacity on their AcrylicBrushes, we have implemented some logic to ensure that the Acrylic will look as similar as it can to current 1809 acrylics. Please note that we will be updating our default brushes to account for this recipe change.

      TraceLoggingProvider.h  / TraceLoggingWrite

      Events generated by TraceLoggingProvider.h (e.g. via TraceLoggingWrite macros) will now always have Id and Version set to 0.

      Previously, TraceLoggingProvider.h would assign IDs to events at link time. These IDs were unique within a DLL or EXE, but changed from build to build and from module to module.

      API Updates, Additions and Removals

      Note: The class PushNotificationChannelsRevokedEventArgs, which was available in previous flights, has been removed.

      Additions:

      namespace Windows.AI.MachineLearning {
        public sealed class LearningModelSession : IClosable {
          public LearningModelSession(LearningModel model, LearningModelDevice deviceToRunOn, LearningModelSessionOptions learningModelSessionOptions);
        }
        public sealed class LearningModelSessionOptions
        public sealed class TensorBoolean : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorBoolean CreateFromBuffer(long[] shape, IBuffer buffer);
          public static TensorBoolean CreateFromShapeArrayAndDataArray(long[] shape, bool[] data);
          IMemoryBufferReference CreateReference();
        }
        public sealed class TensorDouble : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorDouble CreateFromBuffer(long[] shape, IBuffer buffer);
          public static TensorDouble CreateFromShapeArrayAndDataArray(long[] shape, double[] data);
          IMemoryBufferReference CreateReference();
        }
        public sealed class TensorFloat : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorFloat CreateFromBuffer(long[] shape, IBuffer buffer);
          public static TensorFloat CreateFromShapeArrayAndDataArray(long[] shape, float[] data);
          IMemoryBufferReference CreateReference();
        }
        public sealed class TensorFloat16Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorFloat16Bit CreateFromBuffer(long[] shape, IBuffer buffer);
          public static TensorFloat16Bit CreateFromShapeArrayAndDataArray(long[] shape, float[] data);
          IMemoryBufferReference CreateReference();
        }
        public sealed class TensorInt16Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorInt16Bit CreateFromBuffer(long[] shape, IBuffer buffer);
          public static TensorInt16Bit CreateFromShapeArrayAndDataArray(long[] shape, short[] data);
          IMemoryBufferReference CreateReference();
        }
        public sealed class TensorInt32Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorInt32Bit CreateFromBuffer(long[] shape, IBuffer buffer);
          public static TensorInt32Bit CreateFromShapeArrayAndDataArray(long[] shape, int[] data);
          IMemoryBufferReference CreateReference();
        }
        public sealed class TensorInt64Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorInt64Bit CreateFromBuffer(long[] shape, IBuffer buffer);
          public static TensorInt64Bit CreateFromShapeArrayAndDataArray(long[] shape, long[] data);
          IMemoryBufferReference CreateReference();
        }
        public sealed class TensorInt8Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorInt8Bit CreateFromBuffer(long[] shape, IBuffer buffer);
          public static TensorInt8Bit CreateFromShapeArrayAndDataArray(long[] shape, byte[] data);
          IMemoryBufferReference CreateReference();
        }
        public sealed class TensorString : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorString CreateFromShapeArrayAndDataArray(long[] shape, string[] data);
          IMemoryBufferReference CreateReference();
        }
        public sealed class TensorUInt16Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorUInt16Bit CreateFromBuffer(long[] shape, IBuffer buffer);
          public static TensorUInt16Bit CreateFromShapeArrayAndDataArray(long[] shape, ushort[] data);
          IMemoryBufferReference CreateReference();
        }
        public sealed class TensorUInt32Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorUInt32Bit CreateFromBuffer(long[] shape, IBuffer buffer);
          public static TensorUInt32Bit CreateFromShapeArrayAndDataArray(long[] shape, uint[] data);
          IMemoryBufferReference CreateReference();
        }
        public sealed class TensorUInt64Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorUInt64Bit CreateFromBuffer(long[] shape, IBuffer buffer);
          public static TensorUInt64Bit CreateFromShapeArrayAndDataArray(long[] shape, ulong[] data);
          IMemoryBufferReference CreateReference();
        }
        public sealed class TensorUInt8Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorUInt8Bit CreateFromBuffer(long[] shape, IBuffer buffer);
          public static TensorUInt8Bit CreateFromShapeArrayAndDataArray(long[] shape, byte[] data);
          IMemoryBufferReference CreateReference();
        }
      }
      namespace Windows.ApplicationModel {
        public sealed class Package {
          StorageFolder EffectiveLocation { get; }
          StorageFolder MutableLocation { get; }
        }
      }
      namespace Windows.ApplicationModel.AppService {
        public sealed class AppServiceConnection : IClosable {
          public static IAsyncOperation<StatelessAppServiceResponse> SendStatelessMessageAsync(AppServiceConnection connection, RemoteSystemConnectionRequest connectionRequest, ValueSet message);
        }
        public sealed class AppServiceTriggerDetails {
          string CallerRemoteConnectionToken { get; }
        }
        public sealed class StatelessAppServiceResponse
        public enum StatelessAppServiceResponseStatus
      }
      namespace Windows.ApplicationModel.Background {
        public sealed class ConversationalAgentTrigger : IBackgroundTrigger
      }
      namespace Windows.ApplicationModel.Calls {
        public sealed class PhoneLine {
          string TransportDeviceId { get; }
          void EnableTextReply(bool value);
        }
        public enum PhoneLineTransport {
          Bluetooth = 2,
        }
        public sealed class PhoneLineTransportDevice
      }
      namespace Windows.ApplicationModel.Calls.Background {
        public enum PhoneIncomingCallDismissedReason
        public sealed class PhoneIncomingCallDismissedTriggerDetails
        public enum PhoneTriggerType {
          IncomingCallDismissed = 6,
        }
      }
      namespace Windows.ApplicationModel.Calls.Provider {
        public static class PhoneCallOriginManager {
          public static bool IsSupported { get; }
        }
      }
      namespace Windows.ApplicationModel.ConversationalAgent {
        public sealed class ConversationalAgentSession : IClosable
        public sealed class ConversationalAgentSessionInterruptedEventArgs
        public enum ConversationalAgentSessionUpdateResponse
        public sealed class ConversationalAgentSignal
        public sealed class ConversationalAgentSignalDetectedEventArgs
        public enum ConversationalAgentState
        public sealed class ConversationalAgentSystemStateChangedEventArgs
        public enum ConversationalAgentSystemStateChangeType
      }
      namespace Windows.ApplicationModel.Preview.Holographic {
        public sealed class HolographicKeyboardPlacementOverridePreview
      }
      namespace Windows.ApplicationModel.Resources {
        public sealed class ResourceLoader {
          public static ResourceLoader GetForUIContext(UIContext context);
        }
      }
      namespace Windows.ApplicationModel.Resources.Core {
        public sealed class ResourceCandidate {
          ResourceCandidateKind Kind { get; }
        }
        public enum ResourceCandidateKind
        public sealed class ResourceContext {
          public static ResourceContext GetForUIContext(UIContext context);
        }
      }
      namespace Windows.ApplicationModel.UserActivities {
        public sealed class UserActivityChannel {
          public static UserActivityChannel GetForUser(User user);
        }
      }
      namespace Windows.Devices.Bluetooth.GenericAttributeProfile {
        public enum GattServiceProviderAdvertisementStatus {
          StartedWithoutAllAdvertisementData = 4,
        }
        public sealed class GattServiceProviderAdvertisingParameters {
          IBuffer ServiceData { get; set; }
        }
      }
      namespace Windows.Devices.Enumeration {
        public enum DevicePairingKinds : uint {
          ProvidePasswordCredential = (uint)16,
        }
        public sealed class DevicePairingRequestedEventArgs {
          void AcceptWithPasswordCredential(PasswordCredential passwordCredential);
        }
      }
      namespace Windows.Devices.Input {
        public sealed class PenDevice
      }
      namespace Windows.Devices.PointOfService {
        public sealed class JournalPrinterCapabilities : ICommonPosPrintStationCapabilities {
          bool IsReversePaperFeedByLineSupported { get; }
          bool IsReversePaperFeedByMapModeUnitSupported { get; }
          bool IsReverseVideoSupported { get; }
          bool IsStrikethroughSupported { get; }
          bool IsSubscriptSupported { get; }
          bool IsSuperscriptSupported { get; }
        }
        public sealed class JournalPrintJob : IPosPrinterJob {
          void FeedPaperByLine(int lineCount);
          void FeedPaperByMapModeUnit(int distance);
          void Print(string data, PosPrinterPrintOptions printOptions);
        }
        public sealed class PosPrinter : IClosable {
          IVectorView<uint> SupportedBarcodeSymbologies { get; }
          PosPrinterFontProperty GetFontProperty(string typeface);
        }
        public sealed class PosPrinterFontProperty
        public sealed class PosPrinterPrintOptions
        public sealed class ReceiptPrinterCapabilities : ICommonPosPrintStationCapabilities, ICommonReceiptSlipCapabilities {
          bool IsReversePaperFeedByLineSupported { get; }
          bool IsReversePaperFeedByMapModeUnitSupported { get; }
          bool IsReverseVideoSupported { get; }
          bool IsStrikethroughSupported { get; }
          bool IsSubscriptSupported { get; }
          bool IsSuperscriptSupported { get; }
        }
        public sealed class ReceiptPrintJob : IPosPrinterJob, IReceiptOrSlipJob {
          void FeedPaperByLine(int lineCount);
          void FeedPaperByMapModeUnit(int distance);
          void Print(string data, PosPrinterPrintOptions printOptions);
          void StampPaper();
        }
        public struct SizeUInt32
        public sealed class SlipPrinterCapabilities : ICommonPosPrintStationCapabilities, ICommonReceiptSlipCapabilities {
          bool IsReversePaperFeedByLineSupported { get; }
          bool IsReversePaperFeedByMapModeUnitSupported { get; }
          bool IsReverseVideoSupported { get; }
          bool IsStrikethroughSupported { get; }
          bool IsSubscriptSupported { get; }
          bool IsSuperscriptSupported { get; }
        }
        public sealed class SlipPrintJob : IPosPrinterJob, IReceiptOrSlipJob {
          void FeedPaperByLine(int lineCount);
          void FeedPaperByMapModeUnit(int distance);
          void Print(string data, PosPrinterPrintOptions printOptions);
        }
      }
      namespace Windows.Globalization {
        public sealed class CurrencyAmount
      }
      namespace Windows.Graphics.DirectX {
        public enum DirectXPrimitiveTopology
      }
      namespace Windows.Graphics.Holographic {
        public sealed class HolographicCamera {
          HolographicViewConfiguration ViewConfiguration { get; }
        }
        public sealed class HolographicDisplay {
          HolographicViewConfiguration TryGetViewConfiguration(HolographicViewConfigurationKind kind);
        }
        public sealed class HolographicViewConfiguration
        public enum HolographicViewConfigurationKind
      }
      namespace Windows.Management.Deployment {
        public enum AddPackageByAppInstallerOptions : uint {
          LimitToExistingPackages = (uint)512,
        }
        public enum DeploymentOptions : uint {
          RetainFilesOnFailure = (uint)2097152,
        }
      }
      namespace Windows.Media.Devices {
        public sealed class InfraredTorchControl
        public enum InfraredTorchMode
        public sealed class VideoDeviceController : IMediaDeviceController {
          InfraredTorchControl InfraredTorchControl { get; }
        }
      }
      namespace Windows.Media.Miracast {
        public sealed class MiracastReceiver
        public sealed class MiracastReceiverApplySettingsResult
        public enum MiracastReceiverApplySettingsStatus
        public enum MiracastReceiverAuthorizationMethod
        public sealed class MiracastReceiverConnection : IClosable
        public sealed class MiracastReceiverConnectionCreatedEventArgs
        public sealed class MiracastReceiverCursorImageChannel
        public sealed class MiracastReceiverCursorImageChannelSettings
        public sealed class MiracastReceiverDisconnectedEventArgs
        public enum MiracastReceiverDisconnectReason
        public sealed class MiracastReceiverGameControllerDevice
        public enum MiracastReceiverGameControllerDeviceUsageMode
        public sealed class MiracastReceiverInputDevices
        public sealed class MiracastReceiverKeyboardDevice
        public enum MiracastReceiverListeningStatus
        public sealed class MiracastReceiverMediaSourceCreatedEventArgs
        public sealed class MiracastReceiverSession : IClosable
        public sealed class MiracastReceiverSessionStartResult
        public enum MiracastReceiverSessionStartStatus
        public sealed class MiracastReceiverSettings
        public sealed class MiracastReceiverStatus
        public sealed class MiracastReceiverStreamControl
        public sealed class MiracastReceiverVideoStreamSettings
        public enum MiracastReceiverWiFiStatus
        public sealed class MiracastTransmitter
        public enum MiracastTransmitterAuthorizationStatus
      }
      namespace Windows.Networking.Connectivity {
        public enum NetworkAuthenticationType {
          Wpa3 = 10,
          Wpa3Sae = 11,
        }
      }
      namespace Windows.Networking.NetworkOperators {
        public sealed class ESim {
          ESimDiscoverResult Discover();
          ESimDiscoverResult Discover(string serverAddress, string matchingId);
          IAsyncOperation<ESimDiscoverResult> DiscoverAsync();
          IAsyncOperation<ESimDiscoverResult> DiscoverAsync(string serverAddress, string matchingId);
        }
        public sealed class ESimDiscoverEvent
        public sealed class ESimDiscoverResult
        public enum ESimDiscoverResultKind
      }
      namespace Windows.Perception.People {
        public sealed class EyesPose
        public enum HandJointKind
        public sealed class HandMeshObserver
        public struct HandMeshVertex
        public sealed class HandMeshVertexState
        public sealed class HandPose
        public struct JointPose
        public enum JointPoseAccuracy
      }
      namespace Windows.Perception.Spatial {
        public struct SpatialRay
      }
      namespace Windows.Perception.Spatial.Preview {
        public sealed class SpatialGraphInteropFrameOfReferencePreview
        public static class SpatialGraphInteropPreview {
          public static SpatialGraphInteropFrameOfReferencePreview TryCreateFrameOfReference(SpatialCoordinateSystem coordinateSystem);
          public static SpatialGraphInteropFrameOfReferencePreview TryCreateFrameOfReference(SpatialCoordinateSystem coordinateSystem, Vector3 relativePosition);
          public static SpatialGraphInteropFrameOfReferencePreview TryCreateFrameOfReference(SpatialCoordinateSystem coordinateSystem, Vector3 relativePosition, Quaternion relativeOrientation);
        }
      }
      namespace Windows.Security.Authorization.AppCapabilityAccess {
        public sealed class AppCapability
        public sealed class AppCapabilityAccessChangedEventArgs
        public enum AppCapabilityAccessStatus
      }
      namespace Windows.Security.DataProtection {
        public enum UserDataAvailability
        public sealed class UserDataAvailabilityStateChangedEventArgs
        public sealed class UserDataBufferUnprotectResult
        public enum UserDataBufferUnprotectStatus
        public sealed class UserDataProtectionManager
        public sealed class UserDataStorageItemProtectionInfo
        public enum UserDataStorageItemProtectionStatus
      }
      namespace Windows.Storage.AccessCache {
        public static class StorageApplicationPermissions {
          public static StorageItemAccessList GetFutureAccessListForUser(User user);
          public static StorageItemMostRecentlyUsedList GetMostRecentlyUsedListForUser(User user);
        }
      }
      namespace Windows.Storage.Pickers {
        public sealed class FileOpenPicker {
          User User { get; }
          public static FileOpenPicker CreateForUser(User user);
        }
        public sealed class FileSavePicker {
          User User { get; }
          public static FileSavePicker CreateForUser(User user);
        }
        public sealed class FolderPicker {
          User User { get; }
          public static FolderPicker CreateForUser(User user);
        }
      }
      namespace Windows.System {
        public sealed class DispatcherQueue {
          bool HasThreadAccess { get; }
        }
        public enum ProcessorArchitecture {
          Arm64 = 12,
          X86OnArm64 = 14,
        }
      }
      namespace Windows.System.Profile {
        public static class AppApplicability
        public sealed class UnsupportedAppRequirement
        public enum UnsupportedAppRequirementReasons : uint
      }
      namespace Windows.System.RemoteSystems {
        public sealed class RemoteSystem {
          User User { get; }
          public static RemoteSystemWatcher CreateWatcherForUser(User user);
          public static RemoteSystemWatcher CreateWatcherForUser(User user, IIterable<IRemoteSystemFilter> filters);
        }
        public sealed class RemoteSystemApp {
          string ConnectionToken { get; }
          User User { get; }
        }
        public sealed class RemoteSystemConnectionRequest {
          string ConnectionToken { get; }
          public static RemoteSystemConnectionRequest CreateFromConnectionToken(string connectionToken);
          public static RemoteSystemConnectionRequest CreateFromConnectionTokenForUser(User user, string connectionToken);
        }
        public sealed class RemoteSystemWatcher {
          User User { get; }
        }
      }
      namespace Windows.UI {
        public sealed class UIContentRoot
        public sealed class UIContext
      }
      namespace Windows.UI.Composition {
        public enum CompositionBitmapInterpolationMode {
          MagLinearMinLinearMipLinear = 2,
          MagLinearMinLinearMipNearest = 3,
          MagLinearMinNearestMipLinear = 4,
          MagLinearMinNearestMipNearest = 5,
          MagNearestMinLinearMipLinear = 6,
          MagNearestMinLinearMipNearest = 7,
          MagNearestMinNearestMipLinear = 8,
          MagNearestMinNearestMipNearest = 9,
        }
        public sealed class CompositionGraphicsDevice : CompositionObject {
          CompositionMipmapSurface CreateMipmapSurface(SizeInt32 sizePixels, DirectXPixelFormat pixelFormat, DirectXAlphaMode alphaMode);
        }
        public sealed class CompositionMipmapSurface : CompositionObject, ICompositionSurface
        public sealed class CompositionProjectedShadow : CompositionObject
        public sealed class CompositionProjectedShadowCaster : CompositionObject
        public sealed class CompositionProjectedShadowCasterCollection : CompositionObject, IIterable<CompositionProjectedShadowCaster>
        public enum CompositionProjectedShadowDrawOrder
        public sealed class CompositionProjectedShadowReceiver : CompositionObject
        public sealed class CompositionProjectedShadowReceiverUnorderedCollection : CompositionObject, IIterable<CompositionProjectedShadowReceiver>
        public sealed class CompositionRadialGradientBrush : CompositionGradientBrush
        public sealed class CompositionSurfaceBrush : CompositionBrush {
          bool SnapToPixels { get; set; }
        }
        public class CompositionTransform : CompositionObject
        public sealed class CompositionVisualSurface : CompositionObject, ICompositionSurface
        public sealed class Compositor : IClosable {
          CompositionProjectedShadow CreateProjectedShadow();
          CompositionProjectedShadowCaster CreateProjectedShadowCaster();
          CompositionProjectedShadowReceiver CreateProjectedShadowReceiver();
          CompositionRadialGradientBrush CreateRadialGradientBrush();
          CompositionVisualSurface CreateVisualSurface();
        }
        public interface ICompositorPartner_ProjectedShadow
        public interface IVisualElement
      }
      namespace Windows.UI.Composition.Interactions {
        public enum InteractionBindingAxisModes : uint
        public sealed class InteractionTracker : CompositionObject {
          public static InteractionBindingAxisModes GetBindingMode(InteractionTracker boundTracker1, InteractionTracker boundTracker2);
          public static void SetBindingMode(InteractionTracker boundTracker1, InteractionTracker boundTracker2, InteractionBindingAxisModes axisMode);
        }
        public sealed class InteractionTrackerCustomAnimationStateEnteredArgs {
          bool IsFromBinding { get; }
        }
        public sealed class InteractionTrackerIdleStateEnteredArgs {
          bool IsFromBinding { get; }
        }
        public sealed class InteractionTrackerInertiaStateEnteredArgs {
          bool IsFromBinding { get; }
        }
        public sealed class InteractionTrackerInteractingStateEnteredArgs {
          bool IsFromBinding { get; }
        }
        public class VisualInteractionSource : CompositionObject, ICompositionInteractionSource {
          public static VisualInteractionSource CreateFromIVisualElement(IVisualElement source);
        }
      }
      namespace Windows.UI.Composition.Scenes {
        public enum SceneAlphaMode
        public enum SceneAttributeSemantic
        public sealed class SceneBoundingBox : SceneObject
        public class SceneComponent : SceneObject
        public sealed class SceneComponentCollection : SceneObject, IIterable<SceneComponent>, IVector<SceneComponent>
        public enum SceneComponentType
        public class SceneMaterial : SceneObject
        public class SceneMaterialInput : SceneObject
        public sealed class SceneMesh : SceneObject
        public sealed class SceneMeshMaterialAttributeMap : SceneObject, IIterable<IKeyValuePair<string, SceneAttributeSemantic>>, IMap<string, SceneAttributeSemantic>
        public sealed class SceneMeshRendererComponent : SceneRendererComponent
        public sealed class SceneMetallicRoughnessMaterial : ScenePbrMaterial
        public sealed class SceneModelTransform : CompositionTransform
        public sealed class SceneNode : SceneObject
        public sealed class SceneNodeCollection : SceneObject, IIterable<SceneNode>, IVector<SceneNode>
        public class SceneObject : CompositionObject
        public class ScenePbrMaterial : SceneMaterial
        public class SceneRendererComponent : SceneComponent
        public sealed class SceneSurfaceMaterialInput : SceneMaterialInput
        public sealed class SceneVisual : ContainerVisual
        public enum SceneWrappingMode
      }
      namespace Windows.UI.Core {
        public sealed class CoreWindow : ICorePointerRedirector, ICoreWindow {
          UIContext UIContext { get; }
        }
      }
      namespace Windows.UI.Core.Preview {
        public sealed class CoreAppWindowPreview
      }
      namespace Windows.UI.Input {
        public class AttachableInputObject : IClosable
        public enum GazeInputAccessStatus
        public sealed class InputActivationListener : AttachableInputObject
        public sealed class InputActivationListenerActivationChangedEventArgs
        public enum InputActivationState
      }
      namespace Windows.UI.Input.Preview {
        public static class InputActivationListenerPreview
      }
      namespace Windows.UI.Input.Spatial {
        public sealed class SpatialInteractionManager {
          public static bool IsSourceKindSupported(SpatialInteractionSourceKind kind);
        }
        public sealed class SpatialInteractionSource {
          HandMeshObserver TryCreateHandMeshObserver();
          IAsyncOperation<HandMeshObserver> TryCreateHandMeshObserverAsync();
        }
        public sealed class SpatialInteractionSourceState {
          HandPose TryGetHandPose();
        }
        public sealed class SpatialPointerPose {
          EyesPose Eyes { get; }
          bool IsHeadCapturedBySystem { get; }
        }
      }
      namespace Windows.UI.Notifications {
        public sealed class ToastActivatedEventArgs {
          ValueSet UserInput { get; }
        }
        public sealed class ToastNotification {
          bool ExpiresOnReboot { get; set; }
        }
      }
      namespace Windows.UI.ViewManagement {
        public sealed class ApplicationView {
          string PersistedStateId { get; set; }
          UIContext UIContext { get; }
          WindowingEnvironment WindowingEnvironment { get; }
          public static void ClearAllPersistedState();
          public static void ClearPersistedState(string key);
          IVectorView<DisplayRegion> GetDisplayRegions();
        }
        public sealed class InputPane {
          public static InputPane GetForUIContext(UIContext context);
        }
        public sealed class UISettings {
          bool AutoHideScrollBars { get; }
          event TypedEventHandler<UISettings, UISettingsAutoHideScrollBarsChangedEventArgs> AutoHideScrollBarsChanged;
        }
        public sealed class UISettingsAutoHideScrollBarsChangedEventArgs
      }
      namespace Windows.UI.ViewManagement.Core {
        public sealed class CoreInputView {
          public static CoreInputView GetForUIContext(UIContext context);
        }
      }
      namespace Windows.UI.WindowManagement {
        public sealed class AppWindow
        public sealed class AppWindowChangedEventArgs
        public sealed class AppWindowClosedEventArgs
        public enum AppWindowClosedReason
        public sealed class AppWindowCloseRequestedEventArgs
        public sealed class AppWindowFrame
        public enum AppWindowFrameStyle
        public sealed class AppWindowPlacement
        public class AppWindowPresentationConfiguration
        public enum AppWindowPresentationKind
        public sealed class AppWindowPresenter
        public sealed class AppWindowTitleBar
        public sealed class AppWindowTitleBarOcclusion
        public enum AppWindowTitleBarVisibility
        public sealed class CompactOverlayPresentationConfiguration : AppWindowPresentationConfiguration
        public sealed class DefaultPresentationConfiguration : AppWindowPresentationConfiguration
        public sealed class DisplayRegion
        public sealed class FullScreenPresentationConfiguration : AppWindowPresentationConfiguration
        public sealed class WindowingEnvironment
        public sealed class WindowingEnvironmentAddedEventArgs
        public sealed class WindowingEnvironmentChangedEventArgs
        public enum WindowingEnvironmentKind
        public sealed class WindowingEnvironmentRemovedEventArgs
      }
      namespace Windows.UI.WindowManagement.Preview {
        public sealed class WindowManagementPreview
      }
      namespace Windows.UI.Xaml {
        public class UIElement : DependencyObject, IAnimationObject, IVisualElement {
          Vector3 ActualOffset { get; }
          Vector2 ActualSize { get; }
          Shadow Shadow { get; set; }
          public static DependencyProperty ShadowProperty { get; }
          UIContext UIContext { get; }
          XamlRoot XamlRoot { get; set; }
        }
        public class UIElementWeakCollection : IIterable<UIElement>, IVector<UIElement>
        public sealed class Window {
          UIContext UIContext { get; }
        }
        public sealed class XamlRoot
        public sealed class XamlRootChangedEventArgs
      }
      namespace Windows.UI.Xaml.Controls {
        public sealed class DatePickerFlyoutPresenter : Control {
          bool IsDefaultShadowEnabled { get; set; }
          public static DependencyProperty IsDefaultShadowEnabledProperty { get; }
        }
        public class FlyoutPresenter : ContentControl {
          bool IsDefaultShadowEnabled { get; set; }
          public static DependencyProperty IsDefaultShadowEnabledProperty { get; }
        }
        public class InkToolbar : Control {
          InkPresenter TargetInkPresenter { get; set; }
          public static DependencyProperty TargetInkPresenterProperty { get; }
        }
        public class MenuFlyoutPresenter : ItemsControl {
          bool IsDefaultShadowEnabled { get; set; }
          public static DependencyProperty IsDefaultShadowEnabledProperty { get; }
        }
        public sealed class TimePickerFlyoutPresenter : Control {
          bool IsDefaultShadowEnabled { get; set; }
          public static DependencyProperty IsDefaultShadowEnabledProperty { get; }
        }
        public class TwoPaneView : Control
        public enum TwoPaneViewMode
        public enum TwoPaneViewPriority
        public enum TwoPaneViewTallModeConfiguration
        public enum TwoPaneViewWideModeConfiguration
      }
      namespace Windows.UI.Xaml.Controls.Maps {
        public sealed class MapControl : Control {
          bool CanTiltDown { get; }
          public static DependencyProperty CanTiltDownProperty { get; }
          bool CanTiltUp { get; }
          public static DependencyProperty CanTiltUpProperty { get; }
          bool CanZoomIn { get; }
          public static DependencyProperty CanZoomInProperty { get; }
          bool CanZoomOut { get; }
          public static DependencyProperty CanZoomOutProperty { get; }
        }
        public enum MapLoadingStatus {
          DownloadedMapsManagerUnavailable = 3,
        }
      }
      namespace Windows.UI.Xaml.Controls.Primitives {
        public sealed class AppBarTemplateSettings : DependencyObject {
          double NegativeCompactVerticalDelta { get; }
          double NegativeHiddenVerticalDelta { get; }
          double NegativeMinimalVerticalDelta { get; }
        }
        public sealed class CommandBarTemplateSettings : DependencyObject {
          double OverflowContentCompactYTranslation { get; }
          double OverflowContentHiddenYTranslation { get; }
          double OverflowContentMinimalYTranslation { get; }
        }
        public class FlyoutBase : DependencyObject {
          bool IsConstrainedToRootBounds { get; }
          bool ShouldConstrainToRootBounds { get; set; }
          public static DependencyProperty ShouldConstrainToRootBoundsProperty { get; }
          XamlRoot XamlRoot { get; set; }
        }
        public sealed class Popup : FrameworkElement {
          bool IsConstrainedToRootBounds { get; }
          bool ShouldConstrainToRootBounds { get; set; }
          public static DependencyProperty ShouldConstrainToRootBoundsProperty { get; }
        }
      }
      namespace Windows.UI.Xaml.Core.Direct {
        public enum XamlPropertyIndex {
          AppBarTemplateSettings_NegativeCompactVerticalDelta = 2367,
          AppBarTemplateSettings_NegativeHiddenVerticalDelta = 2368,
          AppBarTemplateSettings_NegativeMinimalVerticalDelta = 2369,
          CommandBarTemplateSettings_OverflowContentCompactYTranslation = 2384,
          CommandBarTemplateSettings_OverflowContentHiddenYTranslation = 2385,
          CommandBarTemplateSettings_OverflowContentMinimalYTranslation = 2386,
          FlyoutBase_ShouldConstrainToRootBounds = 2378,
          FlyoutPresenter_IsDefaultShadowEnabled = 2380,
          MenuFlyoutPresenter_IsDefaultShadowEnabled = 2381,
          Popup_ShouldConstrainToRootBounds = 2379,
          ThemeShadow_Receivers = 2279,
          UIElement_ActualOffset = 2382,
          UIElement_ActualSize = 2383,
          UIElement_Shadow = 2130,
        }
        public enum XamlTypeIndex {
          ThemeShadow = 964,
        }
      }
      namespace Windows.UI.Xaml.Documents {
        public class TextElement : DependencyObject {
          XamlRoot XamlRoot { get; set; }
        }
      }
      namespace Windows.UI.Xaml.Hosting {
        public sealed class ElementCompositionPreview {
          public static UIElement GetAppWindowContent(AppWindow appWindow);
          public static void SetAppWindowContent(AppWindow appWindow, UIElement xamlContent);
        }
      }
      namespace Windows.UI.Xaml.Input {
        public sealed class FocusManager {
          public static object GetFocusedElement(XamlRoot xamlRoot);
        }
        public class StandardUICommand : XamlUICommand {
          StandardUICommandKind Kind { get; set; }
        }
      }
      namespace Windows.UI.Xaml.Media {
        public class AcrylicBrush : XamlCompositionBrushBase {
          IReference<double> TintLuminosityOpacity { get; set; }
          public static DependencyProperty TintLuminosityOpacityProperty { get; }
        }
        public class Shadow : DependencyObject
        public class ThemeShadow : Shadow
        public sealed class VisualTreeHelper {
          public static IVectorView<Popup> GetOpenPopupsForXamlRoot(XamlRoot xamlRoot);
        }
      }
      namespace Windows.UI.Xaml.Media.Animation {
        public class GravityConnectedAnimationConfiguration : ConnectedAnimationConfiguration {
          bool IsShadowEnabled { get; set; }
        }
      }
      namespace Windows.Web.Http {
        public sealed class HttpClient : IClosable, IStringable {
          IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryDeleteAsync(Uri uri);
          IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryGetAsync(Uri uri);
          IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryGetAsync(Uri uri, HttpCompletionOption completionOption);
          IAsyncOperationWithProgress<HttpGetBufferResult, HttpProgress> TryGetBufferAsync(Uri uri);
          IAsyncOperationWithProgress<HttpGetInputStreamResult, HttpProgress> TryGetInputStreamAsync(Uri uri);
          IAsyncOperationWithProgress<HttpGetStringResult, HttpProgress> TryGetStringAsync(Uri uri);
          IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryPostAsync(Uri uri, IHttpContent content);
          IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryPutAsync(Uri uri, IHttpContent content);
          IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TrySendRequestAsync(HttpRequestMessage request);
          IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TrySendRequestAsync(HttpRequestMessage request, HttpCompletionOption completionOption);
        }
        public sealed class HttpGetBufferResult : IClosable, IStringable
        public sealed class HttpGetInputStreamResult : IClosable, IStringable
        public sealed class HttpGetStringResult : IClosable, IStringable
        public sealed class HttpRequestResult : IClosable, IStringable
      }
      namespace Windows.Web.Http.Filters {
        public sealed class HttpBaseProtocolFilter : IClosable, IHttpFilter {
          User User { get; }
          public static HttpBaseProtocolFilter CreateForUser(User user);
        }
      }
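
One notable addition in the listing above is the set of non-throwing Try* methods on Windows.Web.Http.HttpClient. As a rough sketch (not official sample code; the URL and class name are placeholders), a UWP app targeting this preview SDK might use them like this:

using System;
using System.Threading.Tasks;
using Windows.Web.Http;

public static class TryHttpSketch
{
    public static async Task FetchAsync()
    {
        var client = new HttpClient();

        // TryGetStringAsync reports transport failures through an
        // HttpGetStringResult instead of throwing an exception.
        HttpGetStringResult result =
            await client.TryGetStringAsync(new Uri("https://example.com/"));

        if (result.Succeeded)
        {
            Console.WriteLine(result.Value);
        }
        else
        {
            Console.WriteLine($"Request failed: {result.ExtendedError}");
        }
    }
}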
       
      

      The post Windows 10 SDK Preview Build 18334 available now! appeared first on Windows Developer Blog.

      .NET Core February 2019 Updates – 1.0.14, 1.1.11, 2.1.8 and 2.2.2


Today, we are releasing the .NET Core February 2019 Update. These updates contain security and reliability fixes. See the individual release notes for details on the included reliability fixes.

      Security

      Microsoft Security Advisory CVE-2019-0657: .NET Core Domain Spoofing Vulnerability

A domain spoofing vulnerability exists in .NET Framework and .NET Core which causes the meaning of a URI to change when International Domain Name encoding is applied. An attacker who successfully exploited the vulnerability could redirect a URI. This issue affects .NET Core 1.0, 1.1, 2.1, and 2.2.

The security update addresses the vulnerability by disallowing certain Unicode characters in the URI.
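
To make the IDN issue concrete, here is a small hedged C# sketch; the look-alike host below is hypothetical, and the exact output depends on your runtime:

using System;

class IdnSpoofSketch
{
    static void Main()
    {
        // Hypothetical look-alike host: the 'е' below is Cyrillic U+0435,
        // so this is a different domain than the ASCII "example.com".
        var uri = new Uri("https://examplе.com/");

        Console.WriteLine(uri.Host);    // the Unicode form a user sees
        Console.WriteLine(uri.IdnHost); // the Punycode (xn--...) form that is actually resolved
    }
}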

      Getting the Update

The latest .NET Core updates are available on the .NET Core download page. This update is also included in the Visual Studio 15.0.21 (.NET Core 1.0 and 1.1) and 15.9.7 (.NET Core 1.0, 1.1, and 2.1) updates, which are also releasing today.

See the .NET Core release notes ( 1.0.14 | 1.1.11 | 2.1.8 | 2.2.2 ) for details on the release, including issues fixed and affected packages.

      Docker Images

      .NET Docker images have been updated for today’s release. The following repos have been updated.

      microsoft/dotnet
      microsoft/dotnet-samples
      microsoft/aspnetcore

      Note: Look at the “Tags” view in each repository to see the updated Docker image tags.

      Note: You must re-pull base images in order to get updates. The Docker client does not pull updates automatically.

      Azure App Services deployment

Deployment of these updates to Azure App Services has begun. We’ll keep this section updated as the regions go live. Deployment to all regions is expected to complete in a few days.

      Lifecycle Information

      The following lifecycle documents describe Microsoft support policies and should be consulted to ensure that you are using supported .NET Core versions.

      Announcing launch of Azure Pipelines app for Slack


I am excited to announce the availability of the Azure Pipelines app for Slack. If you use Slack, the Azure Pipelines app lets you easily monitor events for your pipelines. Set up and manage subscriptions for completed builds, releases, pending approvals, and more from the app, and get notifications for these events in your Slack channels.

      For details, please take a look at the documentation here.

      To install the app to your workspace, click here.

We plan to continue improving the app on a regular basis. The ability to take actions from the notifications (like approving a deployment) is next on our list. Please give the app a try and send us your feedback using the /azpipelines feedback command in the app or on the Azure DevOps feedback portal.

      The post Announcing launch of Azure Pipelines app for Slack appeared first on Azure DevOps Blog.

      February Security Release: Team Foundation Server 2018 Update 3.2 Patch 1 is available


      We announced the Azure DevOps Bounty Program a few weeks ago. We’re excited that this effort has already helped us on our mission to provide the highest level of security for our customers. Thanks to everyone who is participating in the Bounty program.

      We plan to release security updates on the second Tuesday of each month (Patch Tuesday). This will give our customers a predictable and regular cadence that lines up with other security releases from Microsoft. When the updates involve binary changes, our releases will only replace the impacted binaries. If the updates involve database changes, we will release full installations.

TFS 2018 Update 3.2 Patch 1

Today, we released Team Foundation Server 2018 Update 3.2 Patch 1, which fixes two cross-site scripting vulnerabilities found through the Bounty program:

– CVE-2019-0742: Cross site scripting (XSS) vulnerability in work items
– CVE-2019-0743: Cross site scripting (XSS) vulnerability in pull requests

      TFS 2018 Update 2 and Update 3 are impacted by these vulnerabilities. Azure DevOps Server 2019 RC2 is also impacted and will be fixed in the final release of Azure DevOps Server 2019. We recommend that all customers on TFS 2018 Update 2 or Update 3 upgrade to TFS 2018 Update 3.2 and apply TFS 2018 Update 3.2 Patch 1.

Verifying Installation

To verify whether you have this update installed, check the version of the following file: [TFS_INSTALL_DIR]\Application Tier\Web Services\bin\Microsoft.TeamFoundation.WorkItemTracking.Web.dll

TFS 2018 is installed to C:\Program Files\Microsoft Team Foundation Server 2018 by default.

      After installing TFS 2018 Update 3.2 Patch 1, the version will be 16.131.28605.6.
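
If you'd rather script the check than inspect the file by hand, a few lines of C# can read the file version; this is a minimal sketch, assuming the default installation path shown above:

using System;
using System.Diagnostics;

class TfsPatchCheck
{
    static void Main()
    {
        // Assumes the default TFS 2018 installation path; adjust if yours differs.
        const string dll =
            @"C:\Program Files\Microsoft Team Foundation Server 2018\" +
            @"Application Tier\Web Services\bin\" +
            @"Microsoft.TeamFoundation.WorkItemTracking.Web.dll";

        var info = FileVersionInfo.GetVersionInfo(dll);
        Console.WriteLine(info.FileVersion); // expect 16.131.28605.6 after Patch 1
    }
}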

      The post February Security Release: Team Foundation Server 2018 Update 3.2 Patch 1 is available appeared first on Azure DevOps Blog.

      Moving your Azure Virtual Machines has never been easier!


To meet customer demand, Azure is continuously expanding. We’ve been adding new Azure regions and introducing new capabilities. As a result, customers want to move their existing virtual machines (VMs) to new regions while adopting the latest capabilities. Other factors can also prompt a move; for example, you may want to relocate your VMs to increase your SLAs.

      In this blog, we will walk you through the steps you need to follow to move your VMs across regions or within the same region.

      Why do customers want to move their Azure IaaS Virtual Machines?

      Some of the most common reasons that prompt our customers to move their virtual machines include:

      •    Geographical proximity: “I deployed my VM in region A and now region B, which is closer to my end users, has become available.”

      •    Mergers and acquisitions: “My organization was acquired, and the new management team wants to consolidate resources and subscriptions into one region.”

      •    Data sovereignty: “My organization is based in the UK with a large local customer base. As a result of Brexit, I need to move my Azure resources from various European regions to the UK in order to comply with local rules and regulations.”

      •    SLA requirements: “I deployed my VMs in Region A, and I would like to get a higher level of confidence regarding the availability of my services by moving my VMs into Availability Zones (AZ). Region A doesn’t have an AZ at the moment. I want to move my VMs to Region B, which is still within my latency limits and has Availability Zones.”

      If you or your organization are going through any of these scenarios or you have a different reason to move your virtual machines, we’ve got you covered!

      Move Azure VMs to a target region

      For any of the scenarios outlined above, if you want to move your Azure Virtual Machines to a different region with the same configuration as the source region or increase your availability SLAs by moving your virtual machines into an Availability Zone, you can use Azure Site Recovery (ASR). We recommend taking the following steps to ensure a successful transition:

1.    Verify prerequisites: Before moving your VMs to a target region, review a few prerequisites. This ensures you have a basic understanding of Azure Site Recovery replication, the components involved, the support matrix, and so on.

2.    Prepare the source VMs: This involves ensuring network connectivity for your VMs, verifying the certificates installed on them, identifying the networking layout of your source environment and its dependent components, and so on.

3.    Prepare the target region: You should have the necessary permissions to create resources in the target region, including any resources that are not replicated by Site Recovery. Check, for example, your subscription permissions in the target region, the available quota there, Site Recovery’s support for replication across the source-target regional pair, and the pre-creation of load balancers, network security groups (NSGs), key vaults, and so on.

      4.    Copy data to the target region: Use Azure Site Recovery replication technology to copy data from the source VM to the target region.

      5.    Test the configuration: Once the replication is complete, test the configuration by performing a failover test to a non-production network.

      6.    Perform the move: Once you’re satisfied with the testing and you have verified the configuration, you can initiate the actual move to the target region.

      7.    Discard the resources in the source region: Clean up the resources in the source region and stop replication of data.


      Move your Azure VM ‘as is’

If you intend to retain the same configuration in the target region as in the source region, you can do so with Azure Site Recovery. Your virtual machine’s availability SLA will be the same before and after the move. A single instance VM will come back online as a single instance VM after the move. VMs in an Availability Set will be placed into an Availability Set, and VMs in an Availability Zone will be placed into an Availability Zone within the target region.

      To learn more about the steps to move your VMs, refer to the documentation.
       

      Move your Azure virtual machines to increase availability

As many of you know, we offer Availability Zones (AZs), a high-availability offering that protects your applications and data from datacenter failures. AZs are unique physical locations within an Azure region and are equipped with independent power, cooling, and networking. To ensure resiliency, there’s a minimum of three separate zones in all enabled regions. With AZs, Azure offers a 99.99 percent VM uptime SLA.

You can use Azure Site Recovery to move your single instance VM or the VMs in an Availability Set into an Availability Zone, thereby achieving a 99.99 percent uptime SLA. You can choose to place your single instance VM or the VMs in an Availability Set into Availability Zones when you enable replication for your VM using Azure Site Recovery. Ideally, the VMs in an Availability Set should be spread across Availability Zones. The SLA for availability will be 99.99 percent once you complete the move operation. To learn more about the steps to move the VMs and improve your availability, refer to our documentation.
       

Azure natively provides you with the high availability and reliability you need for your mission-critical workloads, and you can choose to increase your SLAs and meet compliance requirements using the disaster recovery features provided by Azure Site Recovery. You can use the same service to increase the availability of the virtual machines you have already deployed, as described in this blog. Getting started with Azure Site Recovery is easy: simply check out the pricing information and sign up for a free Azure trial. You can also visit the Azure Site Recovery forum on the Microsoft Developer Network (MSDN) for additional information and to engage with other customers.
