
Migrating your existing on-prem SQL Server database to Azure SQL DB


If you are in the process of moving an existing .NET application to Azure, it’s likely you’ll have to migrate an existing, on-prem SQL database as well. There are a few different ways you can go about this, so let’s go through them.

Data Migration Assistant (downtime required)

The Data Migration Assistant (download | documentation) is free, easy to use, slick and extremely powerful! It can:

  • Evaluate if your database is ready to migrate and produce a readiness report (command line support included)
  • Provide recommendations for how to remediate migration blocking issues
  • Recommend the minimum Azure SQL Database SKU based on performance counter data of your existing database
  • Perform the actual migration of schema, data and objects (server roles, logins, etc.)

After a successful migration, applications will be able to connect to the target SQL server databases seamlessly. There are currently a couple of limitations, but the majority of databases shouldn’t be impacted. If this sounds interesting, check out the full tutorials on how to migrate to Azure SQL DB and how to migrate to Azure SQL DB Managed Instance.

Azure Data Migration Service (no downtime required)

The Azure Data Migration Service allows you to move your on-prem database to Azure without taking it offline during the migration. Applications can keep on running while the migration is taking place. Once the database in Azure is ready you can switch your applications over immediately.

If this sounds interesting, check out the full tutorials on how to migrate to Azure SQL DB and how to migrate to Azure SQL DB Managed Instance without downtime.

SQL Server Management Studio (downtime required)

You are probably already familiar with SQL Server Management Studio (download | documentation), but if you are not, it's basically a free IDE for SQL Server built on top of the Visual Studio shell. Unlike the Data Migration Assistant, it cannot produce readiness reports or suggest remediating actions, but it can perform the actual migration in two different ways.

The first way is by selecting the command “Deploy Database to Microsoft Azure SQL Database…” which will bring up the migration wizard to take you through the process step by step:

The second way is by exporting the existing, on-prem database as a .bacpac file (docs to help with that) and then importing the .bacpac file into Azure:

 

Resolving database migration compatibility issues

There are a wide variety of compatibility issues that you might encounter, depending both on the version of SQL Server in the source database and the complexity of the database you are migrating. Use the following resources, in addition to a targeted Internet search using your search engine of choice:

In addition to searching the Internet and using these resources, use the MSDN SQL Server community forums or Stack Overflow. If you have any questions or problems, just leave us a comment below.

 


Azure.Source – Volume 70


Now in preview

Anomaly detection using built-in machine learning models in Azure Stream Analytics

Many customers use Azure Stream Analytics to continuously monitor massive amounts of fast-moving streams of data to detect issues that do not conform to expected patterns and prevent catastrophic losses. This, in essence, is anomaly detection. Built-in machine learning models for anomaly detection in Azure Stream Analytics significantly reduce the complexity and costs associated with building and training machine learning models. This feature is now available for public preview worldwide. See how to use simple SQL language to author and deploy powerful analytics processing logic that can scale up and scale out to deliver insights with millisecond latencies.

Anomaly detection using machine learning in Azure Stream Analytics

Update 19.02 for Azure Sphere public preview now available

Device builders can now bring the security of Azure Sphere to products even faster than ever before. The Azure Sphere 19.02 release is now available in preview and focuses on broader enablement of device capabilities, reducing time to market with new reference solutions, and continuing to prioritize features based on feedback from organizations building with Azure Sphere. To build applications that leverage this new functionality, you will need to ensure that you have installed the latest Azure Sphere SDK Preview for Visual Studio. All Wi-Fi connected devices will automatically receive an updated Azure Sphere OS.

Also in preview

Now generally available

Actuating mobility in the enterprise with new Azure Maps services and SDKs

The mobility space is at the forefront of the most complex challenges faced by cities and urban areas today. Azure Maps has introduced new SDKs and cloud-based services to equip Azure customers with the tools necessary to make smarter, faster, more informed decisions and to enable enterprises, partners, and cities to build the solutions that help visualize, analyze, and optimize mobility challenges all while getting the benefits of a rich set of maps and mapping services with the fastest map data refresh rate available.

Three screenshots of Azure Maps in a mobile device

Also generally available

News and updates

Get started quickly using templates in Azure Data Factory

Cloud data integration helps organizations integrate data of various forms and unify complex processes in a hybrid data environment. Different organizations often have similar data integration needs and repeat business processes. Templates in Azure Data Factory help you get started quickly with building data factory pipelines and improve your productivity, while reducing development time for repeat processes. The templates are available in a Template gallery that contains use-case based templates, data movement templates, SSIS templates, and transformation templates that you can use to get hands-on with building your data factory pipelines.

Quickly build data integration pipelines using templates in Azure Data Factory

Azure IoT Hub Java SDK officially supports Android Things platform

Connectivity is often the first challenge in the Internet of Things (IoT) world. That’s why we released Azure IoT SDKs to enable building IoT applications that interact with IoT Hub and the IoT Hub Device Provisioning Service. These SDKs cover most popular languages in IoT development including C, .NET, Java, Python, and Node.js, as well as popular platforms like Windows, Linux, OSX, and MBED all with support for iOS and Android for enabling mobile IoT scenarios. We are happy to share that the Azure IoT Hub Java SDK now officially supports the Android Things platform so developers can leverage the operating system on the device side, while using Azure IoT Hub as the central message hub that scales to millions of simultaneously connected devices.

Azure IoT Edge runtime available for Ubuntu virtual machines

Azure IoT Edge is a fully managed service allowing you to deploy Azure and third-party services to run directly on IoT devices, whether they are cloud-connected or offline, and offers functionality ranging from connectivity to analytics to storage, all while allowing you to deploy modules entirely from the Azure portal without writing any code. Azure IoT Edge deployments are built to scale so that you can deploy globally to any number of devices or simulate the workload with virtual devices. Now generally available: the open-source Azure IoT Edge runtime preinstalled on Ubuntu virtual machines.

Azure IoT Edge VM on Azure Marketplace

Monitor at scale in Azure Monitor with multi-resource metric alerts

Customers rely on Azure to run large scale applications and services critical to their business. To run services at scale, you need to set up alerts to proactively detect, notify, and remediate issues before they affect your customers. We've just released multi-resource support for metric alerts in Azure Monitor to help you set up critical alerts at scale. Learn about metric alerts in Azure Monitor that work on a host of multi-dimensional platform and custom metrics, and can notify you when the metric breaches a defined threshold. *This functionality is currently only supported for virtual machines, with support for other resource types coming soon.

How Azure Security Center helps you protect your environment from new vulnerabilities

Recently, the disclosure of a vulnerability (CVE-2019-5736) was announced in the open-source software (OSS) container runtime runC, allowing an attacker to gain root-level code execution on a host. Azure Security Center can help you detect vulnerable resources in your environment, whether in Microsoft Azure, on-premises, or in other clouds. See how Azure Security Center can help you detect that an exploitation has occurred and alert you.

Announcing launch of Azure Pipelines app for Slack

Use the Azure Pipelines app for Slack to easily monitor the events for your pipelines. Set up and manage subscriptions for completed builds, releases, pending approvals (and more) then get notifications for these events in your Slack channels.

The February release of Azure Data Studio is now available

Azure Data Studio is a new cross-platform desktop environment for data professionals using the family of on-premises and cloud data platforms on Windows, macOS, and Linux. The February release of Azure Data Studio (formerly known as SQL Operations Studio) is now generally available. New features include a new Admin Pack for SQL Server, Profiler filtering, Save as XML, Data-Tier Application Wizard improvements, updates to the SQL Server 2019 Preview extension, results streaming turned on by default, and important bug fixes.

Additional news and updates

Technical content

Controlling costs in Azure Data Explorer using down-sampling and aggregation

Azure Data Explorer (ADX) is an outstanding service for continuous ingestion and storage of high velocity telemetry data from cloud services and IoT devices. In this helpful post, we see how ADX users can take advantage of stored functions, the Microsoft Flow Azure Kusto connector, and how to create and update tables with filtered, down-sampled, and aggregated data for controlling storage costs.

Azure Stack IaaS – part one

Azure Stack at its core is an Infrastructure-as-a-Service (IaaS) platform and has created a lot of excitement around new hybrid application patterns, consistent Azure APIs to simplify DevOps practices and processes, the extensive Azure ecosystem available through the Marketplace, and the option to run Azure PaaS Services locally, such as App Services and IoT Hub. Underlying all of these are some exciting IaaS capabilities that this new Azure Stack IaaS blog series outlines.

Benefits of using Azure API Management with microservices

The IT industry is experiencing a shift from monolithic applications to microservices-based architectures. The benefits of this new approach include independent development and the freedom to choose technology, independent deployment and release cycle, individual microservices that can scale independently, and reducing the overall cost while increasing reliability. Azure API Management is now available in a new pricing tier with billing per execution especially suited for microservice-based architectures and event-driven systems. Explore how to design a simplified online store system, why and how to manage public facing APIs in microservice-based architectures, and how to get started with Azure API Management and microservices.

Flowchart showing the fronting of Azure API Management to Azure Functions

Maximize throughput with repartitioning in Azure Stream Analytics

Customers love Azure Stream Analytics for the ease of analyzing streams of data in motion, with the ability to set up a running pipeline within five minutes. Optimizing throughput has always been a challenge when trying to achieve high performance in a scenario that can't be fully parallelized. A new extension of Azure Stream Analytics SQL adds an INTO keyword that lets you specify the number of partitions for a stream when reshuffling the data with a PARTITION BY statement. This new capability unlocks performance and helps maximize throughput in such scenarios, and gives you better control of the data streams after a shuffle.

Moving your Azure Virtual Machines has never been easier!

Because of geographical proximity, a merger or acquisition, data sovereignty, or SLA requirements, customers often approach us wanting to move an Azure virtual machine from the region in which it is currently deployed to another target region. To meet this need, Azure is continuously expanding, adding new Azure regions and introducing new capabilities. Walk through the steps you need to move your virtual machine as is, or to increase availability, across regions.

Flowchart outlining the 7 steps to ensure a successful VM transition

Protect Azure Virtual Machines using storage spaces direct with Azure Site Recovery

We all need to protect our business-critical applications. Storage Spaces Direct (S2D) lets you host a guest cluster on Microsoft Azure, which is especially useful in scenarios where virtual machines (VMs) host critical applications like SQL, Scale-Out File Server, or SAP ASCS. Learn how Azure Site Recovery support for Storage Spaces Direct allows you to take your highly available application and make it more resilient by providing protection against region-level failure. Disaster recovery between Azure regions is available in all Azure regions where ASR is available. This feature is only available for Azure Virtual Machines' disaster recovery.

Under the hood: Performance, scale, security for cloud analytics with ADLS Gen2

Since we announced the general availability of Azure Data Lake Storage (ADLS) Gen2, Azure has the only cloud storage service purpose-built for big data analytics: it is designed to integrate with a broad range of analytics frameworks to enable a true enterprise data lake, maximizes performance via true filesystem semantics, scales to meet the needs of the most demanding analytics workloads, is priced at cloud object storage rates, and is flexible enough to support a broad range of workloads so that you are not required to create silos for your data. Take a closer look at the technical foundation of ADLS that will power the end-to-end analytics scenarios our customers demand.

Build a Node.js App with the npm Module for Azure Cosmos DB

Ever wondered what it's like to try the JavaScript SDK to manage Azure Cosmos DB SQL API data? Follow along with John Papa as he walks viewers through the Microsoft quickstart guide, and you'll be able to use the SDK in under six minutes!

Thumbnail from Build a Node.js App with the npm Module for Azure Cosmos DB
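
If you'd rather skim than watch, a minimal sketch of the flow the quickstart covers with the @azure/cosmos package looks roughly like this; the endpoint, key, and database/container names are placeholders rather than values from the video:

```typescript
// Minimal sketch of the @azure/cosmos flow; endpoint, key, and the
// database/container names are placeholders, not values from the video.
import { CosmosClient } from "@azure/cosmos";

async function main(): Promise<void> {
  const client = new CosmosClient({
    endpoint: "https://<your-account>.documents.azure.com",
    key: process.env.COSMOS_KEY as string,
  });

  // Create (or reuse) a database and a container.
  const { database } = await client.databases.createIfNotExists({ id: "Tasks" });
  const { container } = await database.containers.createIfNotExists({
    id: "Items",
    partitionKey: { paths: ["/category"] },
  });

  // Insert an item, then query it back with the SQL API.
  await container.items.create({ id: "1", category: "personal", description: "Try the SDK" });
  const { resources } = await container.items
    .query("SELECT c.id, c.description FROM c WHERE c.category = 'personal'")
    .fetchAll();
  console.log(resources);
}

main().catch(console.error);
```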

Keep Calm, and Keep Coding with Cosmos and Node.js

John Papa digs into the Azure Cosmos DB SDK for Node.js to discover how good it feels when an SDK is fast to install, fast to learn, and fast to execute.

Real Talk JavaScript podcast - Episode 17: Azure Functions & Serverless with Jeff Hollan

Jeff Hollan, Senior Program Manager for Microsoft Azure Functions, joins John to talk about serverless and talks about his serverless doorbell project.

5 Azure Offerings I ❤️For Xamarin Development

It’s no secret that Matt is both an Azure and Xamarin fan. In this post, he rounds up five Azure offerings that are great for Xamarin development—Azure AD (B2C), Azure Key Vault, Azure Functions, Azure Custom Vision API, and Azure App Center—all of which can be accessed with a free Azure account!

Docker from the beginning

In this first part, Chris looks at the basics of containers and gives some hands-on advice and sample code to get the reader started. In part two, Chris looks at Docker volumes and how they can make for a great developer experience. Future parts will continue the Docker story, including a look at Kubernetes and our own AKS service.

Migrating Azure Functions f1 (.NET) to v2 (.NET Core/Standard)

In this post, Jeremy shared the lessons he learned upgrading his serverless link shortener app to the new Azure Functions platform.

Prototyping your first cloud-connected IoT project using an MXChip board and Azure IoT hub

Learn how to quickly build a prototype IoT project using Azure IoT Hub. Jim's blog post gives full instructions on how to get started with the MXChip board using Visual Studio Code, what Azure IoT Hub is, how to send messages and how to use device twins powered by Azure Functions to sync data.

Photo of Jim Bennett's Internet-powered fan

Kubernetes Basics

In this miniseries, Microsoft Distinguished Engineer and Kubernetes co-creator, Brendan Burns provides foundational knowledge to help you understand Kubernetes and how it works.

AZ 203 Developing Solutions for Microsoft Azure Study Guide

It's essential to be knowledgeable of how the cloud can bring the best value to the developer. App Dev Manager Isaac Levin shares some tips about how to best prepare for the Microsoft Certified Azure Developer Associate certification (AZ-203) exam.

Azure shows

Episode 266 - Azure Kubernetes Service | The Azure Podcast

The dynamic Sean McKenna, Lead PM for AKS, gives us all the details about the service and why and when you should use it for your cloud compute needs. Russell and Kendall get together with him @ Microsoft Ready for a great show.

HashiCorp Vault on Azure | Azure Friday

Working with Microsoft, HashiCorp launched Vault with a number of features to make secret management easier to automate in Azure cloud. Yoko Hyakuna from HashiCorp joins Donovan Brown to show how Azure Key Vault can auto-unseal the HashiCorp Vault server, and then how HashiCorp Vault can dynamically generate Azure credentials for apps using its Azure secrets engine feature.

Using HashiCorp Vault with Azure Kubernetes Service (AKS) | Azure Friday

As the adoption of Kubernetes grows, secret management tools must integrate well with Kubernetes so that sensitive data can be protected in the containerized world. On this episode, Yoko Hakuna demonstrates HashiCorp Vault's Kubernetes auth method for verifying the validity of containers requesting access to secrets.

Azure Instance Metadata Service updates for attested data | Azure Friday

Azure Instance Metadata Service is used to provide information about a running virtual machine that can be used to configure and manage the machine. With the latest updates, Azure Marketplace vendors can now validate that their image is running in Azure.

Ethereum Name Service | Block Talk

This session provides an overview of the Ethereum Name Service and the core features that are included.  We then show a demonstration of how this service can be useful when building decentralized applications.

Introducing Spatial operations for Azure Maps | Internet of Things Show

The ability to analyze data is a core facet of the Internet of Things. Azure Maps Spatial Operations will take location information and analyze it on the fly to help inform our customers of ongoing events happening in time and space. The Spatial Operations we are launching consist of Geofencing, Buffer, Closest Point, Great Circle Distance and Point in Polygon. We will demonstrate geofencing capabilities, how to associate fences with temporal constraints so that fences are evaluated only when relevant, and how to react to Geofence events with Event Grid. Finally, we will talk about how other spatial operations can support geofencing and other scenarios.

Building Applications from Scratch with Azure and Cognitive Services | On .NET

In this episode, Christos Matskas joins us to share the story of an interesting application he built using the Azure SDKs for .NET and Cognitive Services. We not only get an overview of creating custom vision models, but also a demo of the docker containers for cognitive services. Christos also shares how he was able to leverage .NET standard libraries to maximize code portability and re-use.

Open Source Security Best Practices for Developers, Contributors, and Maintainers | The Open Source Show

Armon Dadgar, HashiCorp CTO and co-founder, and Aaron Schlesinger talk about how and why HashiCorp Vault is a security and open source product: two things traditionally considered at odds. You'll learn how to avoid secret sprawl and protect your apps' data, ways for contributors and maintainers to enhance the security of any project, and why you should trust no one (including yourself).

Overview of Open Source DevOps in Azure Government | Azure Government

In this episode of the Azure Government video series, Steve Michelotti talks with Harshal Dharia, Cloud Solution Architect at Microsoft, about open source DevOps in Azure Government. Having a reliable and secure DevOps pipeline is one of the most important factors to a successful development project. However, different organizations and agencies often have different tools for DevOps. Harshal starts out by discussing various DevOps tools available, and specifically focuses this demo-heavy talk on open source DevOps tools in Azure Government. Harshal then shows how Terraform and Jenkins can be used in a robust CI/CD pipeline with other open source tools. For DevOps, your organization or agency can bring all your favorite open source tools and use them, but from within the highly scalable, reliable, and secure environment of Azure Government. If you’re into open source DevOps, this is the video for you!

Thumbnail from Overview of Open Source DevOps in Azure Government

How to create an Azure Functions project with Visual Studio Code | Azure Tips and Tricks

In this edition of Azure Tips and Tricks, learn how to create an Azure Functions project with Visual Studio Code. To start working with Azure Functions, make sure the "Azure Functions" extension is installed inside of Visual Studio Code.

Thumbnail from How to create an Azure Functions project with Visual Studio Code
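
For reference, an HTTP-triggered function in a TypeScript Functions project looks roughly like the sketch below; the extension scaffolds the project structure and bindings for you, and the names here are illustrative rather than the generated scaffold verbatim:

```typescript
// Sketch of an HTTP-triggered function in a TypeScript Functions project;
// the function name and messages are illustrative, not the generated scaffold.
import { AzureFunction, Context, HttpRequest } from "@azure/functions";

const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
  // Read a value from the query string or the JSON body.
  const name = req.query.name || (req.body && req.body.name);

  context.res = {
    status: 200,
    body: name ? `Hello, ${name}!` : "Pass a name in the query string or in the request body.",
  };
};

export default httpTrigger;
```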

How to manage virtual machines on the go via the Azure mobile app | Azure Portal Series

Managing your Azure virtual machines while you’re on the go is easy using the Azure mobile app. In this video of the Azure Portal "How To" series, learn how to use the Azure mobile app to monitor, manage, and stay connected to your Azure virtual machines.

Thumbnail from How to manage virtual machines on the go via the Azure mobile app

How to monitor your Kubernetes clusters | Kubernetes Best Practices Series

Get best practices on how to monitor your Kubernetes clusters from field experts in this episode of the Kubernetes Best Practices Series. In this intermediate level deep dive, you will learn about monitoring and logging in Kubernetes from Dennis Zielke, Technology Solutions Professional in the Global Black Belts Cloud Native Applications team at Microsoft.

Thumbnail from How to monitor your Kubernetes clusters

Simon Timms on Azure Functions and Processes - Episode 23 | The Azure DevOps Podcast

Simon Timms is a long-time freelance Software Engineer, multi-time Microsoft MVP, co-host of ASP.NET Monsters on Channel 9, and also runs the Function Junction YouTube channel. He considers himself a generalist with a history of working in a diverse range of industries. He's personally interested in A.I., DevOps, and microservices; and skilled in Software as a Service (SaaS), .NET Framework, Continuous Integration, C#, and JavaScript. He's also written two books with Packt Publishing: Social Data Visualization with HTML5 and JavaScript and Mastering JavaScript Design Patterns. In this week's episode, Simon and Jeffrey discuss Azure Functions and running processes in Azure. Simon explains how the internal model of Azure Functions works, the difference between Azure Functions and Durable Functions, the benefits of and barriers to Azure Functions, and much, much more.

Events

Learn how to build with Azure IoT: Upcoming IoT Deep Dive events

Microsoft IoT Show, the place to go to hear about the latest announcements, tech talks, and technical demos, is starting a new interactive, live-streaming event and technical video series called IoT Deep Dive.

Join us in Seattle from May 6-8 for Microsoft Build

Join us in Seattle for Microsoft’s premier event for developers. Come and experience the latest developer tools and technologies. Imagine new ways to create software by getting industry insights into the future of software development. Connect with your community to understand new development trends and innovative ways to code. Registration goes live on February 27.

Promotional graphic for the Microsoft Build event

Customers, partners, and industries

PyTorch on Azure: Deep learning in the oil and gas industry

Drilling for oil and gas is one of the most dangerous jobs on Earth. Workers are exposed to the risk of events ranging from small equipment malfunctions to entire offshore rigs catching on fire.

How to avoid overstocks and understocks with better demand forecasting

Promotional planning and demand forecasting are incredibly complex processes. Something seemingly straightforward, like planning the weekly flyer, requires answers to thousands of questions involving a multitude of teams deciding which products to promote and where to position the inventory to maximize sell-through. Explore how Rubikloud's Price & Promotion Manager enables AI-powered optimization for enterprise retail and allows merchants and supply chain professionals to take a holistic approach to integrated forecasting and replenishment.


Ignite: The Tour was in Australia last week, which is home to A Cloud Guru. Check out this three-part report from Lars Klint.

Azure This Week - Ignite Special - 12 February 2019 | A Cloud Guru – Azure This Week

Lars reports from an exclusive invite-only tour of the Microsoft Quantum research facility at Sydney University.

Thumbnail from Azure This Week - Ignite Special - 12 February 2019

Azure This Week - Ignite Special - 13 February 2019 | A Cloud Guru – Azure This Week

On this special edition episode of Azure This Week, Lars talks to Anthony Chu, Christina Warren, Jason Hand and looks at the new support for Azure SQL Database in Azure Stream Analytics.

Thumbnail from Azure This Week - Ignite Special - 13 February 2019

Azure This Week - Ignite Special - 14 February 2019 | A Cloud Guru – Azure This Week

On day 2 of Microsoft Ignite | The Tour | Sydney, Lars talks Azure security with Damian Brady, Tanya Janca, Orin Thomas and Troy Hunt.

Thumbnail from Azure This Week - Ignite Special - 14 February 2019

Exploring nopCommerce – open source e-commerce shopping cart platform in .NET Core


nopCommerce demo site

I've been exploring nopCommerce. It's an open source e-commerce shopping cart. I spoke at their conference in New York a few years ago and they were considering moving to open source and cross-platform .NET Core from the Windows-only .NET Framework, so I figured it was time for me to check in on their progress.

I headed over to https://github.com/nopSolutions/nopCommerce and cloned the repo. I have .NET Core 2.2 installed that I grabbed here. You can check out their official site and their live demo store.

It was a simple git clone and a "dotnet build" and it built and ran quite immediately. It's always nice to have a site "just work" after a clone (it's kind of a low bar, but no matter what the language, it's always a joy when it works).

I have SQL Express installed, but I could just as easily use SQL Server for Linux running under Docker. I used the standard SQL Server Express connection string: "Server=localhost\SQLEXPRESS;Database=master;Trusted_Connection=True;" and was off and running.

nopCommerce is easy to setup

It's got a very complete /admin page with lots of Commerce-specific reports, the ability to edit the catalog, have sales, manage customers, deal with product reviews, set promotions, and more. It's like WordPress for Stores. Everything you'd need to put up a store in a few hours.

Very nice admin site in nopCommerce

nopCommerce has a very rich plugin marketplace. Basically anything you'd need is there but you could always write your own in .NET Core. For example, if I want to add Paypal as a payment option, there's 30 plugins to choose from!

NOTE: If you have any theming issues (css not showing up) with just using "dotnet build," you can try "msbuild" or opening the SLN in Visual Studio Community 2017 or newer. You may be seeing folders for plugins and themes not being copied over with dotnet build. Or you can "dotnet publish" and run from the publish folder.

Now, to be clear, I just literally cloned the HEAD of the actively developed version and had no problems, but you might want to use the most stable version from 2018 depending on your needs. Either way, nopCommerce is a clean code base that's working great on .NET Core. The community is VERY active, and there's a company behind the open source version that can do the work for you, customize, service, and support.




More reliable event-driven applications in Azure with an updated Event Grid


We have been incredibly excited to be a part of the rise of event-driven programming as a core building block for cloud application architecture. By making the following features generally available, we want to enable you to build more sophisticated, performant, and stable event-driven applications in Azure. We are proud to announce the general availability of the following set of features, previously in preview:

  1. Dead lettering
  2. Retry policies
  3. Storage Queues as a destination
  4. Hybrid Connections as a destination
  5. Manual Validation Handshake

To take advantage of the GA status of the features, make sure you are using our 2019-01-01 API and SDKs. If you are using the Azure portal or CloudShell, you’re already good to go. If you are using CLI or PowerShell, make sure you have versions 2.0.56 or later for CLI and 1.1.0 for PowerShell.

Dead lettering

Dead lettering gives you an at-least-once guarantee that you will receive your events in mission-critical systems. With a dead-letter destination set, you will never lose a message, even if your event handler is down, your authorization fails, your endpoint has a bug, or it is overwhelmed with volume.

Dead lettering allows you to connect each event subscription to a storage account, so that if your primary event pipeline fails, Azure Event Grid can deliver those events to a storage account for consumption at any time.
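
As a hedged sketch of what consuming that backstop can look like: dead-lettered events are written as JSON blobs to the container you configure, and the snippet below lists and parses them with the @azure/storage-blob package. The connection string and container name are placeholders, not values from this post.

```typescript
// Hedged sketch: list and inspect dead-lettered events, which Event Grid writes
// as JSON blobs into the storage container you configure. The connection string
// and container name below are placeholders.
import { BlobServiceClient } from "@azure/storage-blob";

async function dumpDeadLetters(): Promise<void> {
  const service = BlobServiceClient.fromConnectionString(process.env.STORAGE_CONNECTION_STRING as string);
  const container = service.getContainerClient("eventgrid-deadletters");

  for await (const blob of container.listBlobsFlat()) {
    const buffer = await container.getBlobClient(blob.name).downloadToBuffer();
    // Each blob holds an array of undelivered events plus delivery metadata.
    const events = JSON.parse(buffer.toString());
    for (const e of events) {
      console.log(blob.name, e.eventType, e.id);
    }
  }
}

dumpDeadLetters().catch(console.error);
```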

Retry policies

Retry policies make your primary eventing pipeline more robust in the event of ephemeral failures. While dead lettering provides you with a backstop in case there are long lasting failures in your system, it is more common to see only temporary outages in distributed systems.

Configuring retry policies allows you to set how many times, or for how long you would like an event to be retried before it is dead lettered or dropped. Sometimes, you may want to keep retrying an event as long as possible regardless of how late it is. Other times, once an event is stale, it has no value, so you want it dropped immediately. Retry policies let you choose the delivery schedule that works best for you.

Storage Queues as a destination

Event Grid can directly push your events to an Azure Storage Queue. Queues can be a powerful event handler when you need to buffer the ingress of events so that your handler can scale up properly. Similarly, if your event handler can't guarantee uptime, putting a storage queue in between allows you to hold those events and process them when your event handler is ready.

Storage queues also have virtual network (VNet) integration which allows for VNet injection of Event Grid events. If you need to connect an event source to an event handler that is within a VNet, you can tell Event Grid to publish to a storage queue and then consume events in your VNet via your queue.
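
As a rough sketch of the consumption side (not from this post): the snippet below drains events from a queue destination with the @azure/storage-queue package. The queue name and connection string are placeholders, and in our experience the message body is the event JSON, base64-encoded, so the code tries decoding before parsing.

```typescript
// Rough sketch of draining a Storage Queue destination with @azure/storage-queue.
// Queue name and connection string are placeholders; the message body is assumed
// to be the event JSON, base64-encoded, so we try decoding before parsing.
import { QueueServiceClient } from "@azure/storage-queue";

function toEvent(messageText: string): any {
  const decoded = Buffer.from(messageText, "base64").toString("utf8");
  return JSON.parse(decoded.trim().startsWith("{") ? decoded : messageText);
}

async function processEvents(): Promise<void> {
  const queue = QueueServiceClient
    .fromConnectionString(process.env.STORAGE_CONNECTION_STRING as string)
    .getQueueClient("eventgrid-events");

  const { receivedMessageItems } = await queue.receiveMessages({ numberOfMessages: 16 });
  for (const msg of receivedMessageItems) {
    const event = toEvent(msg.messageText);
    console.log(event.eventType, event.subject);              // handle the event here
    await queue.deleteMessage(msg.messageId, msg.popReceipt); // remove once processed
  }
}

processEvents().catch(console.error);
```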

Hybrid connections as a destination

If you want to build and debug locally while connected to cloud resources for an event, have an on-premises service that can't expose an HTTP endpoint, or need to work from behind a locked-down firewall, Hybrid Connections allows you to connect those resources to Event Grid.

Hybrid Connections as an event handler gives you an HTTP endpoint to connect Event Grid to. It also gives you the option to make an outbound WebSocket connection from your local resource to the same hybrid connection instance. The hybrid connection will then relay your incoming events from Event Grid to your on-premises resource.

Manual validation handshake

Not all event handlers can customize their HTTP response in order to provide proof of endpoint ownership. The manual validation handshake makes proving you are an authorized owner of an endpoint as easy as copy and paste.

When you register an Event Grid subscription, a validation event is sent to the endpoint with a validation code. You can still respond to the validation event by echoing back the validation code; however, if that is not convenient, you can now copy and paste the validation URL included in the event into any browser to validate the endpoint. Doing a GET on the endpoint validates proof of ownership.
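
For handlers that can shape their HTTP response, the echo looks roughly like the sketch below (an Azure Functions-style HTTP handler in TypeScript; the handler name and wiring are illustrative). The manual handshake described above is the fallback when you can't do this.

```typescript
// Sketch of the programmatic handshake: echo data.validationCode from the
// Microsoft.EventGrid.SubscriptionValidationEvent back to Event Grid.
import { AzureFunction, Context, HttpRequest } from "@azure/functions";

const handler: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
  const events = (req.body ?? []) as Array<{ eventType: string; data: any }>;

  for (const event of events) {
    if (event.eventType === "Microsoft.EventGrid.SubscriptionValidationEvent") {
      // Echo the code to prove ownership of the endpoint. With the manual handshake
      // described above, you would instead open event.data.validationUrl in a browser.
      context.res = { body: { validationResponse: event.data.validationCode } };
      return;
    }
    // Otherwise, handle your application events here.
  }

  context.res = { status: 200 };
};

export default handler;
```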

We hope you react well to this news.

The Azure Event Grid team

.NET Core 1.0 and 1.1 will reach End of Life on June 27, 2019


.NET Core 1.0 was released on June 27, 2016 and .NET Core 1.1 was released on November 16, 2016. As an LTS release, .NET Core 1.0 is supported for three years. .NET Core 1.1 fits into the same support timeframe as .NET Core 1.0. .NET Core 1.0 and 1.1 will reach end of life and go out of support on June 27, 2019, three years after the initial .NET Core 1.0 release.

After June 27, 2019, .NET Core patch updates will no longer include updated packages or container images for .NET Core 1.0 and 1.1. You should plan your upgrade from .NET Core 1.x to .NET Core 2.1 or 2.2 now.

Upgrade to .NET Core 2.1

The supported upgrade path for .NET Core 1.x applications is via .NET Core 2.1 or 2.2. Instructions for upgrading can be found in the following documents:

Note: The migration documents are written for .NET Core 2.0 to 2.1 migration, but equally apply to .NET Core 1.x to 2.1 migration.

.NET Core 2.1 is a long-term support (LTS) release. We recommend that you make .NET Core 2.1 your new standard for .NET Core development, particularly for apps that are not updated often.

.NET Core 2.0 has already reached end-of-life, as of October 1, 2018. It is important to migrate applications to at least .NET Core 2.1.

Microsoft Support Policy

Microsoft has a published support policy for .NET Core. It includes policies for two release types: LTS and Current.

.NET Core 1.0, 1.1 and 2.1 are LTS releases. .NET Core 2.0 and 2.2 are Current releases.

  • LTS releases are designed for long-term support. They include features and components that have been stabilized, requiring few updates over a longer support release lifetime. These releases are a good choice for hosting applications that you do not intend to update.
  • Current releases include new features that may undergo future change based on feedback. These releases are a good choice for applications in active development, giving you access to the latest features and improvements. You need to upgrade to later .NET Core releases more often to stay in support.

Both types of releases receive critical fixes throughout their lifecycle, for security, reliability, or to add support for new operating system versions. You must stay up to date with the latest patches to qualify for support.

The .NET Core Supported OS Lifecycle Policy defines which Windows, macOS and Linux versions are supported for each .NET Core release.

.NET Framework February 2019 Preview of Quality Rollup


Today, we released the February 2019 Preview of Quality Rollup.

Quality and Reliability

This release contains the following quality and reliability improvements.

CLR

  • Addresses an issue in System.Threading.Timer where a single global queue, protected by a single process-wide lock, caused scalability issues when Timers are used frequently on multi-CPU machines. The fix can be opted into with the AppContext switch below. See instructions for enabling the switch. [750048]
    • Switch name: Switch.System.Threading.UseNetCoreTimer
    • Switch value to enable: true
      Don’t rely on applying the setting programmatically – the switch value is read only once per AppDomain at the time when the System.Threading.Timer type is loaded.

SQL

  • Addresses an issue that caused compatibility breaks seen in some System.Data.SqlClient usage scenarios. [721209]

WPF

  • Improved the memory allocation and cleanup scheduling behavior of the weak-event pattern. The fix can be opted into with the AppContext switches below. See instructions for enabling the switches. [676441]
    • Switch name: Switch.MS.Internal.EnableWeakEventMemoryImprovements
    • Switch name: Switch.MS.Internal.EnableCleanupSchedulingImprovements
    • Switch value to enable: true

 

Getting the Update

The Preview of Quality Rollup is available via Windows Update, Windows Server Update Services, and Microsoft Update Catalog.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog. For Windows 10 update 1607, Windows 10 update 1703, Windows 10 update 1709, and Windows 10 update 1803, the .NET Framework updates are part of the Windows 10 Monthly Rollup.

The following table is for Windows 10 and Windows Server 2016+ versions.

Product Version and Cumulative Update KB:
  • Windows 10 1803 (April 2018 Update): Catalog 4487029
    • .NET Framework 3.5, 4.7.2: 4487029
  • Windows 10 1709 (Fall Creators Update): Catalog 4487021
    • .NET Framework 3.5, 4.7.1, 4.7.2: 4487021
  • Windows 10 1703 (Creators Update): Catalog 4487011
    • .NET Framework 3.5, 4.7, 4.7.1, 4.7.2: 4487011
  • Windows 10 1607 (Anniversary Update) and Windows Server 2016: Catalog 4487006
    • .NET Framework 3.5, 4.6.2, 4.7, 4.7.1, 4.7.2: 4487006

The following table is for earlier Windows and Windows Server versions.

Product Version and Preview of Quality Rollup KB:
  • Windows 8.1, Windows RT 8.1, Windows Server 2012 R2: Catalog 4487258
    • .NET Framework 3.5: Catalog 4483459
    • .NET Framework 4.5.2: Catalog 4483453
    • .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: Catalog 4486545
  • Windows Server 2012: Catalog 4487257
    • .NET Framework 3.5: Catalog 4483456
    • .NET Framework 4.5.2: Catalog 4483454
    • .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: Catalog 4486544
  • Windows 7 SP1, Windows Server 2008 R2 SP1: Catalog 4487256
    • .NET Framework 3.5.1: Catalog 4483458
    • .NET Framework 4.5.2: Catalog 4483455
    • .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: Catalog 4486546
  • Windows Server 2008: Catalog 4487259
    • .NET Framework 2.0, 3.0: Catalog 4483457
    • .NET Framework 4.5.2: Catalog 4483455
    • .NET Framework 4.6: Catalog 4486546

Previous Monthly Rollups

The last few .NET Framework Monthly updates are listed below for your convenience:



.NET Framework updates on Microsoft Update Catalog


.NET Framework Rollup updates simplify updating systems regardless of the underlying version of .NET Framework present on a system. Rollups present a single (per-OS) update offering on Windows Update (WU), Windows Server Update Services (WSUS), and the Microsoft Update (MU) Catalog. For IT admins this means significantly less update management for all supported .NET Framework versions, since Rollups handle patch applicability automatically on WU or WSUS. There is also a smaller group of advanced IT admins managing environments that are disconnected from the internet, WU, and/or WSUS, with workflows that require downloading .NET version-specific patches for pre-Windows 10 systems. For this subset of advanced IT admin customers, we are bringing the ability to search and download .NET version-specific updates from the MU Catalog (previously only possible per OS). We are also taking the opportunity to explain how .NET Framework updates are structured.


 

How .NET Framework Rollup updates are structured

As explained in Announcing Cumulative Updates for .NET Framework for Windows 10 October 2018 Update:

  • Pre-Windows 10 operating systems (Windows 8.1, Server 2012 R2 and below):
    • One “parent” OS-level KB update, presented as “2019-01 Security and Quality Rollup for .NET Framework 3.5, 4.5.2, 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2”.
      • Behind the scenes, this update contains various patches that will only apply based on what version of .NET is on your system(s). These “child” KBs will follow a similar form:
        • Security and Quality Rollup for .NET Framework for 3.5
        • Security and Quality Rollup for .NET Framework for 4.5.2
        • Security and Quality Rollup for .NET Framework for 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2 (a single patch shared for these versions)
  • Windows 10 version 1803, Server 2016 and below:
    • One Windows cumulative update that contains all .NET Framework content together with core OS content. No separate/individual updates.
  • Windows 10 version 1809, Server 2019 and above:
    • One .NET cumulative update, alongside the Windows cumulative update. A single update with no version-specific breakdown (it carries the 3.5 and 4.7.2 updates).
      • Note that with .NET 4.8 there will be a separate 4.8-specific patch for 1809 (RS5), so this will change to more closely mirror the pre-Windows 10 structure.

How to retrieve content from Microsoft Update Catalog depending on your scenario

With each new .NET Framework update you can access update-specific information, including KBs for each OS and .NET version, from the .NET Framework Blog and/or from the Microsoft Security Update Guidance, as appropriate and depending on release type.

  • I want to let WSUS/SCCM take advantage of automatic applicability management
    • OS (“parent”)-level updates: You can search the MU Catalog for the regular per-OS KB, as follows:
      •  Search on Catalog steps..
      • Import and deploy from WSUS or SCCM steps.
  • I want to manage .NET version-specific updates manually within my Internet disconnected environment.
    • Search on Catalog for “child” KBs steps..

 

 


Class schedules on Azure Lab Services


Classroom labs in Azure Lab Services make it easy to set up labs by handling the creation and management of virtual machines and enabling the infrastructure to scale. Through our continuous enhancements to Azure Lab Services, we are proud to share that the latest deployment now includes support for class schedules.

Schedule management is one of the key features requested by our customers. This feature helps teachers easily create, edit, and delete schedules for their classes. A teacher can set up a recurring or one-time schedule and provide a start date, end date, and time for the class in the time zone of their choice. Schedules can be viewed and managed through a simple, easy-to-use calendar view.

Screenshot of Azure Lab Services scheduling calendar

Students' virtual machines are turned on and ready to use when a class schedule starts and are turned off at the end of the schedule. This feature helps limit the usage of virtual machines to class times only, thereby helping IT admins and teachers manage costs efficiently.

Schedule hours are not counted against quota allotted to a student. Quota is the time limit outside of schedule hours when a student can use the virtual machine.

With schedules, we are also introducing the option of no quota hours. When no quota hours are set for a lab, students can only use their virtual machines during scheduled hours, or when the teacher turns on virtual machines for them to use.

Screenshot of quota per user selection

Students will be able to clearly see when a lab schedule session is in progress on their virtual machines view.

Screenshot of my virtual machines dashboard in Azure Lab Services

You can learn more about how to use schedules in our documentation, “Create and manage schedules for classroom labs in Azure Lab Services.” Please give this feature a try and provide feedback on the Azure Lab Services UserVoice forum. If you have any questions, please post them on Stack Overflow.

Update to Azure DevOps Projects support for Azure Kubernetes Service


Kubernetes is going from strength to strength as adoption across the industry continues to grow. But there are still plenty of customers coming to container orchestration for the first time while also building up their familiarity with Docker and containers in general. We see the need to help teams go from a container image, or just a git repo, to an app running in Kubernetes in as few steps as possible. It's also important that we do this in a way that allows them to customize afterward and build on their knowledge as they go.

At Microsoft, we are trying to make it easy for our customers to adopt Kubernetes by offering two solutions.

First is Azure Kubernetes Service (AKS), a fully managed Kubernetes container orchestration service. AKS simplifies the deployment and operations of Kubernetes and enables you to dynamically scale your application infrastructure with confidence and agility.

The other is Azure DevOps Projects, a simplified experience that helps you launch an app on an Azure service of your choice in a few quick steps. For example, in just a matter of minutes, it can help you provision AKS and Azure Container Registry, and start building and deploying a container app to AKS by using Azure Pipelines. Creating a DevOps Project provisions Azure resources and comes with a git code repository, Application Insights integration, and a continuous delivery pipeline set up to deploy to Azure. The DevOps Projects dashboard lets you monitor code commits, builds, and deployments from a single view in the Azure portal.

Key benefits of Azure DevOps Projects are:

  • Get up and running with a new app and a CI/CD pipeline in just a few minutes
  • Support for a wide range of popular frameworks such as .NET, Java, PHP, Node.js, and Python
  • Start fresh or bring your own application from GitHub
  • Built-in Application Insights and Azure Monitor for containers integration for instant analytics and actionable insights
  • Cloud-powered CI/CD using Azure DevOps

Several customers are using Azure DevOps Projects to deploy their apps to AKS, but a clear piece of feedback we received from early adopters was to add support for reusing an existing Azure Kubernetes Service (AKS) cluster in Azure DevOps Projects rather than having to create a new one each time.

Today we are happy to share that you can now use Azure DevOps Projects to deploy multiple apps to a single AKS cluster. This feature is generally available in the Azure portal. To get started, create an Azure DevOps Project now. For more information, please read our Azure DevOps Projects documentation.


Use GraphQL with Hasura and Azure Database for PostgreSQL


Azure Database for PostgreSQL provides a fully managed, enterprise-ready community PostgreSQL database as a service. The PostgreSQL community edition helps you easily migrate existing apps to the cloud or develop cloud-native applications, using the languages and frameworks of your choice. The service offers industry-leading innovations such as built-in high availability, backed by a 99.99 percent SLA, without the need to set up replicas, enabling customers to save over two times the cost. It also allows customers to scale compute up or down in seconds, helping you easily adjust to changes in workload demands.

Additionally, built-in intelligent features such as Query Performance Insight and performance recommendations help customers further lower their total cost of ownership (TCO) by providing customized recommendations and insights to optimize the performance of their Postgres databases. These benefits coupled with unparalleled security and compliance, Microsoft Azure’s industry leading global reach, and Azure IP Advantage, empower customers to focus on their business and applications rather than the database.

As part of the broader Postgres community, our aim is to contribute to and partner with others in the community to bring new features to Azure Database for PostgreSQL users. You can now take advantage of the Hasura GraphQL Engine, a lightweight, high performance open-source product that can instantly provide a real time GraphQL API on a Postgres database. This post provides an overview of how to use GraphQL with Azure Database for PostgreSQL.

What is GraphQL?

GraphQL is a query language for APIs and a server-side runtime for executing database queries. The GraphQL spec is centered around a typed schema that is available to users of the API, which are mostly front-end developers, to make any CRUD queries on the exposed fields. It’s agnostic to the underlying database or source of data. One of GraphQL’s main benefits is that clients can specify exactly what they need from the server and receive that data in a predictable way. GraphQL provides a solution to common hurdles faced when using REST APIs, and it is currently being adopted widely to speed up product development cycles.
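
As a hedged illustration of that point, the query below asks for exactly two fields and nothing else. The endpoint URL and the schema (users, id, name) are made up for the example, and the request is an ordinary HTTP POST (Node 18+ or any fetch polyfill):

```typescript
// Illustration only: a GraphQL query asks for exactly the fields it needs.
// The endpoint URL and schema (users, id, name) are hypothetical.
const query = `
  query RecentUsers($limit: Int!) {
    users(limit: $limit) {
      id
      name
    }
  }`;

async function run(): Promise<void> {
  const response = await fetch("https://my-hasura-instance.example.com/v1/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { limit: 10 } }),
  });
  const { data, errors } = await response.json();
  console.log(errors ?? data.users); // only id and name come back, nothing more
}

run().catch(console.error);
```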

Hasura GraphQL Engine

The Hasura GraphQL Engine is a lightweight, high performance open-source product that gives you a real time GraphQL API on a Postgres database instantly. The engine comes with an admin UI to help you explore your GraphQL APIs and manage your database schema and data.

Hasura’s GraphQL Engine also allows you to write custom resolvers with schema-stitching and to integrate serverless functions or microservice APIs that get triggered on database events. With Hasura’s GraphQL, you can easily build 3factor apps. Learn more by reading about Hasura.

Using GraphQL with Azure Database for PostgreSQL

Please note: if you already have a database running on Azure and want to use GraphQL on that database, go directly to the "Working with GraphQL with an existing Azure Database for PostgreSQL" section below.

With Hasura's GraphQL one-click deploy, you can now get a real-time GraphQL API on Azure with an Azure Database for PostgreSQL server in under five minutes!

Get started by selecting Deploy to Azure below, which will open the Azure portal in preparation for using GraphQL with Azure Database for PostgreSQL. If you are prompted to log in to the Azure portal, enter your credentials to continue.

Deploy to Azure button

This deployment uses Azure Container Instances for deploying Hasura’s GraphQL and Azure Database for PostgreSQL for provisioning a managed Postgres instance.

Working with GraphQL with an existing Azure Database for PostgreSQL

If you already have a PostgreSQL database on Azure, you can connect Hasura's GraphQL Engine to that database and have GraphQL APIs without affecting any other part of your application.

Get started by selecting the Deploy to Azure graphic below, which will open the Azure portal in preparation for working with GraphQL and an existing Azure Database for PostgreSQL database. If you are prompted to log in to the Azure portal, enter your credentials to continue.

Deploy to Azure button

This deployment uses Azure Container Instances for deploying Hasura and connects to an existing Azure Database for PostgreSQL instance.

What else can you do with Hasura’s GraphQL engine?

Explore the ready to use real-time API

SQL-style types and operators for sorting, filtering, pagination, and aggregations are supported out of the box. Read more about Hasura's powerful syntax for queries and mutations.

Hasura also has built-in live-queries called subscriptions in GraphQL, for getting real-time updates to results of a query. No need to write any code for handling websocket connections!

Add authorization

Hasura's granular, role-based permissions system lets you configure column- and row-level access control rules for your data, and integrates with any third-party authentication provider. You can also integrate with your own custom authentication services using a JWT or webhook. You can learn more about adding authorization by visiting Hasura's documentation, "Authentication / Access control."

Add custom business logic

GraphQL Engine can be used as a gateway for custom business logic, for example with remote GraphQL schemas. You can write your own GraphQL servers in your favorite language and expose them at a single endpoint. Hasura will take care of the schema stitching.

Trigger Azure Functions on database events

Hasura can trigger serverless Azure Functions or webhooks on database events like insert, update, or delete. They can be used to execute asynchronous business logic such as sending a “welcome email” to newly registered users. Read more about triggering serverless Azure Functions and webhooks on the Hasura website, or visit Hasura’s documentation, “Event triggers.”
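As a rough sketch of what such a trigger handler could look like, the HTTP-triggered Azure Function below (TypeScript) reads an event payload of the general shape Hasura posts and reacts to inserts. The payload fields and the sendWelcomeEmail helper are assumptions for illustration only.

import { AzureFunction, Context, HttpRequest } from "@azure/functions";

// Hypothetical helper; replace with your own email or notification logic.
async function sendWelcomeEmail(email: string): Promise<void> {
  console.log(`Sending welcome email to ${email}`);
}

// HTTP-triggered function invoked by a Hasura event trigger on a hypothetical users table.
const onUserEvent: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
  const payload = req.body;
  const op = payload?.event?.op;             // "INSERT" | "UPDATE" | "DELETE" (assumed payload shape)
  const newRow = payload?.event?.data?.new;  // the inserted or updated row (assumed payload shape)

  if (op === "INSERT" && newRow?.email) {
    await sendWelcomeEmail(newRow.email);
  }

  context.res = { status: 200, body: { ok: true } };
};

export default onUserEvent;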

Next steps

Get started and create your PostgreSQL servers today! Learn more about Azure Database for PostgreSQL in the overview documentation, “What is Azure Database for PostgreSQL.”

Please continue to provide UserVoice feedback on the features and functionality that you want to see next. If you need any help or have questions please check out the, “Azure Database for PostgreSQL documentation.”

For support and feedback related to Hasura, please use Discord. You can also follow Hasura’s product updates at @HasuraHQ.

Acknowledgements

Special thanks to the Hasura team for their contributions to this posting.

Live stream analysis using Video Indexer


Video Indexer is an Azure service designed to extract deep insights from video and audio files offline, that is, to analyze a media file that has already been created. However, for some use cases it's important to get the media insights from a live feed as quickly as possible to unlock operational and other time-sensitive scenarios. For example, rich metadata on a live stream could be used by content producers to automate TV production (as in our example with the EndemolShine Group), by journalists in a newsroom to search live feeds, to build content-based notification services, and more.

To that end, I joined forces with Victor Pikula, a Cloud Solution Architect at Microsoft, to architect and build a solution that allows customers to use Video Indexer in near real time on live feeds. The indexing delay can be as low as four minutes with this solution, depending on the size of the chunks of data being indexed, the input resolution, the type of content, and the compute power used for the process.

Figure 1 – Sample player displaying the Video Indexer metadata on the live stream

The stream analysis solution uses Azure Functions and two Logic Apps to process a live program from a live channel in Azure Media Services with Video Indexer, and displays the results with Azure Media Player, showing the resulting near real-time stream.

At a high level, the solution comprises two main steps. The first step runs every 60 seconds: it takes a sub-clip of the last 60 seconds played, creates an asset from it, and indexes it via Video Indexer. The second step is called once indexing is complete: the captured insights are processed, sent to Azure Cosmos DB, and the indexed sub-clip is deleted.

The sample player plays the live stream and gets the insights from Azure Cosmos DB, using a dedicated Azure Function. It displays the metadata and thumbnails in sync with the live video.
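A minimal sketch of that function's data access is shown below, using the @azure/cosmos SDK. The database name, container name, and document shape (insights keyed by a start offset in seconds) are assumptions rather than the exact schema used in the sample project.

import { CosmosClient } from "@azure/cosmos";

// Connection details are assumed to come from application settings.
const client = new CosmosClient({
  endpoint: process.env.COSMOS_ENDPOINT as string,
  key: process.env.COSMOS_KEY as string,
});

// Returns the insight documents that fall inside a given time window of the live stream.
export async function getInsights(startSeconds: number, endSeconds: number) {
  const container = client.database("videoindexer").container("insights"); // hypothetical names
  const { resources } = await container.items
    .query({
      query: "SELECT * FROM c WHERE c.startSeconds >= @start AND c.startSeconds < @end",
      parameters: [
        { name: "@start", value: startSeconds },
        { name: "@end", value: endSeconds },
      ],
    })
    .fetchAll();
  return resources;
}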

Figure 2 – The two logic apps processing the live stream every minute in the cloud.

Near real-time indexing for video production

At the EBU Production Technology Seminar in Geneva last month, Microsoft demonstrated an end-to-end solution. Several live feeds were ingested into Azure using Dejero technology or the WebRTC protocol and sent to the Make.TV Live Video Cloud to switch inputs. The selected input was sent as a transcoded stream to Azure Media Services for multi-bitrate transcoding and OTT delivery in low-latency mode. The same stream was also processed in near real time with Video Indexer.

Figure 3 – Example of live stream processing in Azure

Next steps

The full code and a step-by-step deployment guide can be found in the GitHub project for live media analytics with Video Indexer. Need near real-time analytics for your content? Now you have a ready-made solution, so go ahead and give it a try!

Have questions or feedback? We would love to hear from you! Visit our UserVoice to help us prioritize features, or email VISupport@Microsoft.com with any questions.

New studies highlight how AI is transforming employee productivity and accelerating business results


Create a CI/CD pipeline for your Azure IoT Edge solution with Azure Pipelines


Modern software moves quickly and demands more from developers than ever. New CI/CD tools can help developers deliver value faster and more transparently, but the need for customized scripts that address different kinds of edge solutions still presents a challenge for some CI/CD pipelines. Now, with the Azure IoT Edge task in Azure Pipelines, developers have an easier way to build and push the modules in different platforms and deliver to a set of Azure IoT Edge devices continuously in the cloud.

Azure IoT Edge is a fully managed service that delivers cloud intelligence locally by deploying and running AI, Azure services, and custom logic directly on cross-platform IoT devices. An Edge solution contains one or more modules, which are hosted as Docker images and run in Docker containers on the edge device. For an Edge solution to be applied to target Edge devices, a deployment needs to be created in Azure IoT Hub. To try this out, visit our Quick Start documentation.

The Azure IoT Edge task in Azure Pipelines provides the Build module images and Push module images tasks for continuous integration, and the Deploy to IoT Edge devices task for continuous delivery.

  • For Build module images and Push module images, you can specify which modules to build and the target container registry for the Docker images.
  • For Deploy to IoT Edge devices, you will set Azure IoT Hub as the target of deployment and configure other parameters (priority, target condition).

You can combine the above tasks freely. If you have different container registries for your modules, you can add several Push module images tasks. If you only need to verify that the Docker build succeeds, you can use the Build module images task on its own. If your Edge solution targets different platforms (amd64/windows-amd64/arm32v7), you can create multiple sets of CI/CD pipelines with build agents on the different platforms.

In continuous delivery, you can use the powerful stage management in Azure release pipelines, which makes it convenient to manage the different environments in your deployment, such as QA and production.

Visit our Quick Start documentation today to get started with creating Azure Pipelines with the Azure IoT Edge task! We would love to hear your suggestions for this task, so feel free to provide your feedback via Stack Overflow.

The post Create a CI/CD pipeline for your Azure IoT Edge solution with Azure Pipelines appeared first on Azure DevOps Blog.

Announcing TypeScript 3.2

TypeScript 3.2 is here today!

If you’re unfamiliar with TypeScript, it’s a language that brings static type-checking to JavaScript so that you can catch issues before you even run your code – or before you even save your file. It also includes the latest JavaScript features from the ECMAScript standard on older browsers and runtimes by compiling those features into a form that they understand. But beyond type-checking and compiling your code, TypeScript also provides tooling in your favorite editor so that you can jump to the definition of any variable, find who’s using a given function, and automate refactorings and fixes to common problems. TypeScript even provides this for JavaScript users (and can also type-check JavaScript code typed with JSDoc), so if you’ve used editors like Visual Studio or Visual Studio Code on a .js file, TypeScript is powering that experience.

To get started with the language itself, check out typescriptlang.org to learn more.

But if you want to try TypeScript 3.2 out now, you can get it through NuGet or via npm by running

npm install -g typescript

You can also get editor support for Visual Studio 2017, Visual Studio 2015, and Visual Studio Code.

Other editors may have different update schedules, but should all have TypeScript available soon.

We have some important information below for NuGet users and Visual Studio 2015 users, so please continue reading if you use either product.

Below we have a bit about what’s new in 3.2.

strictBindCallApply

As you might’ve guessed from the title of this section, TypeScript 3.2 introduces stricter checking for bind, call, and apply. But what does that mean?

Well, in JavaScript, bind, call, and apply are methods on functions that allow us to do things like bind this and partially apply arguments, call functions with a different value for this, and call functions with an array for their arguments.

Unfortunately, in its earlier days, TypeScript lacked the power to model these functions, and bind, call, and apply were all typed to take any number of arguments and returned any. Additionally, ES2015’s arrow functions and rest/spread arguments gave us a new syntax that made it easier to express what some of these methods do – and in a more efficient way as well.

Still, demand to model these patterns in a type-safe way led us to revisit this problem recently. We realized that two features opened up the right abstractions to accurately type bind, call, and apply without any hard-coding:

  1. this parameter types from TypeScript 2.0
  2. Modeling parameter lists with tuple types from TypeScript 3.0

Combined, the two of them can ensure our uses of bind, call, and apply are more strictly checked when we use a new flag called strictBindCallApply. When using this new flag, the methods on callable objects are described by a new global type called CallableFunction which declares stricter versions of the signatures for bind, call, and apply. Similarly, any methods on constructable (but not callable) objects are described by a new global type called NewableFunction.

As an example, we can look at how Function.prototype.apply acts under this behavior:

function foo(a: number, b: string): string {
    return a + b;
}

let a = foo.apply(undefined, [10]);              // error: too few arguments
let b = foo.apply(undefined, [10, 20]);          // error: 2nd argument is a number
let c = foo.apply(undefined, [10, "hello", 30]); // error: too many arguments
let d = foo.apply(undefined, [10, "hello"]);     // okay! returns a string

Needless to say, whether you do any sophisticated metaprogramming, or you use simple patterns like binding methods in your class instances (this.foo = this.foo.bind(this)), this feature can help catch a lot of bugs. For more details, you can check out the original pull request here.
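For instance, here is a quick sketch of how a bound method stays strictly checked under strictBindCallApply:

class Greeter {
    prefix = "Hello, ";

    greet(name: string): string {
        return this.prefix + name;
    }
}

const greeter = new Greeter();

// Under strictBindCallApply, the bound function keeps the type '(name: string) => string'.
const greet = greeter.greet.bind(greeter);

greet("world"); // okay
greet(42);      // error: argument of type 'number' is not assignable to parameter of type 'string'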

Caveats

One caveat of this new functionality is that due to certain limitations, bind, call, and apply can’t yet fully model generic functions or functions that have overloads. When using these methods on a generic function, type parameters will be substituted with the empty object type ({}), and when used on a function with overloads, only the last overload will ever be modeled.

Object spread on generic types

JavaScript supports a handy way of copying existing properties from an existing object into a new one called “spreads”. To spread an existing object into a new object, you define an element with three consecutive periods (...) like so:

let person = { name: "Daniel", location: "New York City" };

// My secret revealed, I have two clones!
let shallowCopyOfPerson = { ...person };
let shallowCopyOfPersonWithDifferentLocation = { ...person, location: "Seattle" };

TypeScript does a pretty good job here when it has enough information about the type. The type system tries to model the runtime behavior of spreads closely: later properties overwrite earlier ones, methods are ignored, and so on. But unfortunately, up until now it wouldn’t work with generics at all.

function merge<T, U>(x: T, y: U) {
    // Previously an error!
    return { ...x, ...y };
}

This was an error because we had no way to express the return type of merge. There was no syntax (nor semantics) that could express two unknown types being spread into a new one.

We could have come up with a new concept in the type system called an “object spread type”, and in fact we had a proposal for exactly that. Essentially this would be a new type operator that looks like { ...T, ...U } to reflect the syntax of an object spread.
When both T and U are known, that type would flatten down to some new object type.

However, this is pretty complex and requires adding new rules to type relationships and inference. While we explored several different avenues, we recently arrived at two conclusions:

  1. For most uses of spreads in JavaScript, users were fine modeling the behavior with intersection types (i.e. Foo & Bar).
  2. Object.assign – a function that exhibits most of the behavior of spreading objects – is already modeled using intersection types, and we’ve seen very little negative feedback around that.

Given that intersections model the common cases, and that they’re relatively easy to reason about for both users and the type system, TypeScript 3.2 now permits object spreads on generics and models them using intersections:

// Returns 'T & U'
function merge<T, U>(x: T, y: U) {
    return { ...x, ...y };
}

// Returns '{ name: string, age: number, greeting: string } & T'
function foo<T>(obj: T) {
    let person = {
        name: "Daniel",
        age: 26
    };

    return { ...person, greeting: "hello", ...obj };
}

Object rest on generic types

Object rest patterns are sort of the dual of object spreads. Instead of creating a new object with some extra/overridden properties, it creates a new object that lacks some specified properties.

let { x, y, z, ...rest } = obj;

In the above, the most intuitive way to look at this code is that rest copies over all the properties from obj apart from x, y, and z. For the same reason as above, because we didn’t have a good way to describe the type of rest when obj is generic, we didn’t support this for a while.

Here we also considered a new rest operator, but we saw we already had the facilities for describing the above: our Pick and Exclude helper types in lib.d.ts. To reiterate, ...rest basically picks off all of the properties of obj except for x, y, and z in the following example:

interface XYZ { x: any; y: any; z: any; }

function dropXYZ<T extends XYZ>(obj: T) {
    let { x, y, z, ...rest } = obj;
    return rest;
}

If we want to consider the properties of T (i.e. keyof T) except for x, y, and z, we can write Exclude<keyof T, "x" | "y" | "z">. We then want to pick those properties back off of the original type T, which gives us

Pick<T, Exclude<keyof T, "x" | "y" | "z">>

While it’s not the most beautiful type (hey, I’m no George Clooney myself), we can wrap it in a helper type like DropXYZ:

interface XYZ { x: any; y: any; z: any; }

type DropXYZ<T> = Pick<T, Exclude<keyof T, keyof XYZ>>;

function dropXYZ<T extends XYZ>(obj: T): DropXYZ<T> {
    let { x, y, z, ...rest } = obj;
    return rest;
}

Configuration inheritance via node_modules packages

For a long time TypeScript has supported extending tsconfig.json files using the extends field.

{
    "extends": "../tsconfig-base.json",
    "include": ["./**/*"]
    "compilerOptions": {
        // Override certain options on a project-by-project basis.
        "strictBindCallApply": false,
    }
}

This feature is very useful for avoiding duplicated configuration, which can easily fall out of sync, but it really works best when multiple projects are co-located in the same repository so that each project can reference a common “base” tsconfig.json.

But for some teams, projects are written and published as completely independent packages. Those projects don’t have a common file they can reference, so as a workaround, users could create a separate package and reference that:

{
    "extends": "../node_modules/@my-team/tsconfig-base/tsconfig.json",
    "include": ["./**/*"]
    "compilerOptions": {
        // Override certain options on a project-by-project basis.
        "strictBindCallApply": false,
    }
}

However, climbing up parent directories with a series of leading ../s and reaching directly into node_modules to grab a specific file feels unwieldy.

TypeScript 3.2 now resolves tsconfig.jsons from node_modules. When using a bare path for the "extends" field in tsconfig.json, TypeScript will dive into node_modules packages for us.

{
    "extends": "@my-team/tsconfig-base",
    "include": ["./**/*"]
    "compilerOptions": {
        // Override certain options on a project-by-project basis.
        "strictBindCallApply": false,
    }
}

Here, TypeScript will climb up node_modules folders looking for a @my-team/tsconfig-base package. For each of those packages, TypeScript will first check whether package.json contains a "tsconfig" field, and if it does, TypeScript will try to load a configuration file from that field. If that field doesn't exist, TypeScript will try to read a tsconfig.json from the package root. This is similar to the lookup process for .js files in packages that Node uses, and the .d.ts lookup process that TypeScript already uses.
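As a rough sketch, the published @my-team/tsconfig-base package could ship a package.json along these lines, pointing the "tsconfig" field at the shared configuration file (the version and file name here are placeholders):

{
    "name": "@my-team/tsconfig-base",
    "version": "1.0.0",
    "tsconfig": "./tsconfig.base.json"
}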

This feature can be extremely useful for bigger organizations, or projects with lots of distributed dependencies.

Diagnosing tsconfig.json with --showConfig

tsc, the TypeScript compiler, supports a new flag called --showConfig. When running tsc --showConfig, TypeScript will calculate the effective tsconfig.json (after calculating options inherited from the extends field) and print that out. This can be useful for diagnosing configuration issues in general.

BigInt

BigInts are part of an upcoming proposal in ECMAScript that allow us to model theoretically arbitrarily large integers. TypeScript 3.2 brings type-checking for BigInts, as well as support for emitting BigInt literals when targeting esnext.

BigInt support in TypeScript introduces a new primitive type called the bigint (all lowercase). You can get a bigint by calling the BigInt() function or by writing out a BigInt literal by adding an n to the end of any integer numeric literal:

let foo: bigint = BigInt(100); // the BigInt function
let bar: bigint = 100n;        // a BigInt literal

// *Slaps roof of fibonacci function*
// This bad boy returns ints that can get *so* big!
function fibonacci(n: bigint) {
    let result = 1n;
    for (let last = 0n, i = 0n; i < n; i++) {
        const current = result;
        result += last;
        last = current;
    }
    return result;
}

fibonacci(10000n)

While you might imagine close interaction between number and bigint, the two are separate domains.

declare let foo: number;
declare let bar: bigint;

foo = bar; // error: Type 'bigint' is not assignable to type 'number'.
bar = foo; // error: Type 'number' is not assignable to type 'bigint'.

As specified in ECMAScript, mixing numbers and bigints in arithmetic operations is an error. You’ll have to explicitly convert values to BigInts.

console.log(3.141592 * 10000n);     // error
console.log(3145 * 10n);            // error
console.log(BigInt(3145) * 10n);    // okay!

Also important to note is that bigints produce a new string when using the typeof operator: the string "bigint". Thus, TypeScript correctly narrows using typeof as you’d expect.

function whatKindOfNumberIsIt(x: number | bigint) {
    if (typeof x === "bigint") {
        console.log("'x' is a bigint!");
    }
    else {
        console.log("'x' is a floating-point number");
    }
}

We’d like to extend a huge thanks to Caleb Sander for all the work on this feature. We’re grateful for the contribution, and we’re sure our users are too!

Caveats

As we mentioned, BigInt support is only available for the esnext target. It may not be obvious, but because BigInts have different behavior for mathematical operators like +, -, *, etc., providing functionality for older targets where the feature doesn’t exist (like es2017 and below) would involve rewriting each of these operations. TypeScript would need to dispatch to the correct behavior depending on the type, and so every addition, string concatenation, multiplication, etc. would involve a function call.

For that reason, we have no immediate plans to provide downleveling support. On the bright side, Node 11 and newer versions of Chrome already support this feature, so you’ll be able to use BigInts there when targeting esnext.

Certain targets may include a polyfill or BigInt-like runtime object. For those purposes you may want to add esnext.bigint to the lib setting in your compiler options.

Object.defineProperty declarations in JavaScript

When writing in JavaScript files (using allowJs), TypeScript now recognizes declarations that use Object.defineProperty. This means you’ll get better completions, and stronger type-checking when enabling type-checking in JavaScript files (by turning on the checkJs option or adding a // @ts-check comment to the top of your file).

// @ts-check

let obj = {};
Object.defineProperty(obj, "x", { value: "hello", writable: false });

obj.x.toLowercase();
//    ~~~~~~~~~~~
//    error:
//     Property 'toLowercase' does not exist on type 'string'.
//     Did you mean 'toLowerCase'?

obj.x = "world";
//  ~
//  error:
//   Cannot assign to 'x' because it is a read-only property.

Error message improvements

We’re continuing to push improvements in the error experience in TypeScript. Here’s a few things in TypeScript 3.2 that we believe will make the language easier to use.

Thanks to Kingwl, a-tarasyuk, and prateekgoel who helped out on some of these improvements.

Improved narrowing for tagged unions

TypeScript 3.2 makes narrowing easier by relaxing rules for what’s considered a discriminant property. Common properties of unions are now considered discriminants as long as they contain some singleton type (e.g. a string literal, null, or undefined), and they contain no generics.

As a result, TypeScript 3.2 considers the error property in the following example to be a discriminant, whereas before it wouldn’t since Error isn’t a singleton type. Thanks to this, narrowing works correctly in the body of the unwrap function.

type Either<T> =
    | { error: Error; data: null }
    | { error: null; data: T };

function unwrap<T>(result: Either<T>) {
    if (result.error) {
        // Here 'error' is non-null
        throw result.error;
    }

    // Now 'data' is non-null
    return result.data;
}

Editing improvements

The TypeScript project doesn’t simply consist of a compiler/type-checker. The core components of the compiler also provide a cross-platform open-source language service that can power “smarter” editor features like go-to-definition, find-all-references, and a number of quick fixes and refactorings. TypeScript 3.2 brings some small quality of life improvements.

Quick fixes

Implicit any suggestions and “infer from usage” fixes

We strongly suggest users take advantage of stricter checking when possible. noImplicitAny is one of these stricter checking modes, and it helps ensure that your code is as fully typed as possible which also leads to a better editing experience.

Unfortunately it’s not all roses for existing codebases. noImplicitAny is a big switch to flip across a codebase, and it can lead to a lot of error messages and red squiggles in your editor as you type. Turning it on just to find out which variables need types can be a jarring experience.

In this release, TypeScript produces suggestions for most variables and parameters that would have been reported as having implicit any types. When an editor reports these suggestions, TypeScript also provides a quick fix to automatically infer the types for you.

Types automatically filled in for implicit any parameters.

This can make migrating an existing codebase to TypeScript even easier, and we expect it will make migrating to noImplicitAny a breeze.

Going a step further, TypeScript users who are type-checking their .js files using checkJs or the // @ts-check comments can now also get the same functionality with JSDoc types!

Automatically generating JSDoc types for implicit any parameters in JavaScript files.

Other fixes

TypeScript 3.2 also brings two smaller quick fixes for small mistakes.

  • Add a missing new when accidentally calling a constructor.
  • Add an intermediate assertion to unknown when types are sufficiently unrelated.

Thanks to GitHub users iliashkolyar and ryanclarke respectively for these changes!

Improved formatting

Thanks to saschanaz, TypeScript is now smarter about formatting several different constructs. Listing all of them might be a bit cumbersome, but you can take a look at the pull request here.

Breaking changes and deprecations

lib.d.ts changes

TypeScript has recently moved more to generating DOM declarations in lib.d.ts by leveraging IDL files provided by standards groups. Upgraders should take note of any DOM-related issues they encounter and report them.

More specific types

Certain parameters no longer accept null, or now accept more specific types as per the corresponding specifications that describe the DOM.

More platform-specific deprecations

Certain WebKit-specific properties have been deprecated. They are likely to be removed in a future version.

wheelDelta and friends have been removed.

wheelDeltaX, wheelDelta, and wheelDeltaZ have all been removed as they are deprecated properties on WheelEvents.

As a solution, you can use deltaX, deltaY, and deltaZ instead. If older runtimes are a concern, you can include a file called legacy.d.ts in your project and write the following in it:

// legacy.d.ts

interface WheelEvent {
     readonly wheelDelta: number;
     readonly wheelDeltaX: number;
     readonly wheelDeltaZ: number;
}

JSX resolution changes

Our logic for resolving JSX invocations has been unified with our logic for resolving function calls. While this has simplified the compiler codebase and improved some use-cases, there may be some differences in behavior which we may need to reconcile. These differences are likely unintentional, so they are not breaking changes per se, but upgraders should take note of any issues they encounter and report them.

A note for NuGet and Visual Studio 2015

We have some changes coming in TypeScript 3.2 for NuGet and VS2015 users.

First, TypeScript 3.2 and future releases will only ship an MSBuild package, and not a standalone compiler package. Second, while our NuGet packages previously shipped with the Chakra JavaScript engine to run the compiler, the MSBuild package now depends on an invokable version of Node.js to be present. While machines with newer versions of Visual Studio 2017 (versions 15.8 and above) will not be impacted, some testing/CI machines, users with Visual Studio 2015, and users of Visual Studio 2017 15.7 and below may need to install Node.js directly from the site, through Visual Studio 2017 Build Tools (read more here), or via a redistribution of Node.js over NuGet. Otherwise, upgrading to TypeScript 3.2 might result in a build error like the following:

The build task could not find node.exe which is required to run the TypeScript compiler. Please install Node and ensure that the system path contains its location.

Lastly, TypeScript 3.2 will be the last TypeScript release with editor support for Visual Studio 2015 users. To stay current with TypeScript, we recommend upgrading to Visual Studio 2017 for the latest editing experience.

What’s next

Our next release of TypeScript is slated for the end of January. Some things we’ve got planned on the horizon are partial type argument inference and a quick fix to scaffold out declaration files that don’t exist on DefinitelyTyped. While this list is in flux, you can keep track of our plans on the TypeScript Roadmap.

We hope that TypeScript 3.2 makes your day-to-day coding more enjoyable, whether it comes to expressivity, productivity, or ease-of-use. If you’re enjoying it, drop us a line on Twitter at @typescriptlang; and if you’ve got ideas on what we should improve, file an issue on GitHub.

Happy hacking!

– Daniel Rosenwasser and the TypeScript team

The post Announcing TypeScript 3.2 appeared first on TypeScript.

Modernize alerting using Azure Resource Manager storage accounts


Classic alerts in Azure Monitor will reach retirement this coming June. We recommend that you migrate your classic alert rules defined on your storage accounts, especially if you want to retain alerting functionality with the new alerting platform. If you have classic alert rules configured on classic storage accounts, you will need to upgrade your accounts to Azure Resource Manager (ARM) storage accounts before you migrate alert rules.

For more information on the new Azure Monitor service and classic alert retirement read the article, “Classic alerts in Azure Monitor to retire in June 2019.”

Identify classic alert rules

You should first find all classic alert rules before you migrate. The following screenshot shows how you can identify classic alert rules in the Azure portal. Please note that you can filter by subscription, so you can find all classic alert rules without checking each resource separately.

Check classic alert rules

Migrate classic storage accounts to ARM

New alerts do not support classic storage accounts, only ARM storage accounts. If you configured classic alert rules on a classic storage account, you will need to migrate it to an ARM storage account.

You can use "Migrate to ARM" to migrate using the storage menu on your classic storage account. The screenshot below shows an example of this. For more information on how to perform account migration see our documentation, “Platform-supported migration of laaS resources from classic to Azure Resource Manager.”

Re-create alert rules in new alerting platform

After you have migrated the storage account to ARM, you then need to re-create your alert rules. The new alerting platform supports alerting on ARM storage accounts using new storage metrics. You can read more about new storage metric definitions in our documentation, “Azure Storage metrics in Azure Monitor.” In the storage blade, the menu is named "Alert" for the new alerting platform.

Before you re-create alert rules as a new alert for your storage accounts, you may want to understand the difference between classic metrics and new metrics and how they are mapped. You can find detailed mapping in our documentation, “Azure Storage metrics migration.”

The following screenshot shows how to create an alert based on “UsedCapacity.”

Create alert rule on UsedCapacity

Some metrics include dimensions, which allow you to see and use different dimension values. For example, the transactions metric has a dimension named “ResponseType,” and its values represent different types of errors and successes. You can create an alert that monitors transactions for a particular error, such as “ServerBusyError” or “ClientOtherError,” using the “ResponseType” dimension.

The following screenshot shows how to create an alert based on Transactions with “ClientOtherError.”

In the list of dimension values, you won't see all supported values by default; you will only see values that have been triggered by actual requests. If you want to monitor conditions that have not happened yet, you can add a custom dimension value during alert creation. For example, if your storage account has not received any anonymous requests yet, you can still set up alerts in advance to monitor such activity in upcoming requests.

The following screenshot shows how to add a custom dimension value to monitor upcoming anonymous transactions.

Add dimension value for alert

We recommend creating the new alert rules first, verify they work as intended, then remove the classic alerts.

Azure Monitor is a unified monitoring service that includes alerting and other monitor capabilities. You can read more in the “Azure Monitor documentation.”

Preview: Distributed tracing support for IoT Hub


Most IoT solutions, including our Azure IoT reference architecture, use several different services. An IoT message, starting from the device, could flow through a dozen or more services before it is stored or visualized. If something goes wrong in this flow, it can be very challenging to pinpoint the issue. How do you know where the message is dropped? For example, you have an IoT solution that uses five different Azure services and 1,500 active devices. Each device sends ten device-to-cloud messages/second (for a total of 15,000 messages/second), but you notice that your web app sees only 10,000 messages/second. Where is the issue? How do you find the culprit?

To completely understand the flow of messages through IoT Hub, you must trace each message's path using unique identifiers. This process is called distributed tracing. Today, we're announcing distributed tracing support for IoT Hub, in public preview.

Get started with distributed tracing support for IoT Hub

Distributed tracing support for Azure IoT Hub

With this feature, you can:

  • Precisely monitor the flow of each message through IoT Hub using trace context. This trace context includes correlation IDs that allow you to correlate events from one component with events from another component. It can be applied to a subset of IoT device messages, or to all of them, using the device twin.
  • Automatically log the trace context to Azure Monitor diagnostic logs.
  • Measure and understand message flow and latency from devices to IoT Hub and routing endpoints.
  • Start considering how you want to implement distributed tracing for the non-Azure services in your IoT solution.

In the public preview, the feature will be available for IoT Hubs created in select regions.
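As a very rough sketch of what enabling the feature for a single device could look like from the service side, the TypeScript snippet below uses the azure-iothub Node.js SDK to patch a desired property on the device twin. Note that the property name and settings shown (tracingSettings, samplingMode, samplingRate) are placeholders; check the IoT Hub distributed tracing documentation for the actual twin schema.

import { Registry } from "azure-iothub";

const connectionString = process.env.IOTHUB_CONNECTION_STRING as string;
const registry = Registry.fromConnectionString(connectionString);

// Placeholder desired-property patch; the real distributed tracing twin schema differs.
const patch = {
  properties: {
    desired: {
      tracingSettings: { samplingMode: 1, samplingRate: 100 },
    },
  },
};

// Read the twin first so the update can use its etag for optimistic concurrency.
registry.getTwin("my-device", (err, twin) => {
  if (err || !twin) {
    console.error(err);
    return;
  }
  registry.updateTwin(twin.deviceId, patch, twin.etag, (updateErr) => {
    if (updateErr) {
      console.error(updateErr);
    } else {
      console.log("Tracing settings applied to device twin.");
    }
  });
});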

To get started, head over to the IoT Hub distributed tracing documentation.

Developments in AzureR


by Hong Ooi, senior data scientist, Microsoft Azure

The AzureR packages have now been on CRAN for a couple of months, so I thought I'd provide an update on developments in the works.

First, AAD authentication has been moved into a new package, AzureAuth, so that people who just want OAuth tokens can get it without any other baggage. This has many new features:

  • Supports both AAD v1.0 and v2.0
  • Tokens are cached in a user-specific directory using the rappdirs package, typically c:\users\<username>\local\AzureR on Windows and ~/.local/share/AzureR on Linux
  • Supports 4 authentication methods: authorization_code, device_code, client_credentials and resource_grant
  • Supports logging in with a username or with a certificate

In the longer term, the hope is for AzureAuth to be something like the R equivalent of the ADAL client libraries. Things to add include dSTS, federated logins, and more.

AzureRMR 2.0 has a new login framework and no longer requires you to create a service principal (although you can still provide your own SP if desired). Running create_azure_login() logs you into Azure interactively and caches your credentials; in subsequent sessions, run get_azure_login() to log in without having to reauthenticate.

AzureStor 2.0 has several new features mostly for more efficient uploading and downloading:

  • Parallel file transfers, using a pool of background processes. This greatly improves the transfer speed when working with lots of small files.
  • Transfer files to or from a local connection. This lets you transfer in-memory R objects without having to create a temporary file first.
  • Experimental interface to AzCopy version 10. This lets you do essentially anything that AzCopy can do, from within R. (Note: AzCopy 10 is quite a different beast to AzCopy 8.x.)
  • A new framework of generic methods, to organise all the various storage-type-specific functions.

A new AzureKusto package is in the works, for working with Kusto/Azure Data Explorer. This includes:

  • All basic functionality including querying, engine management commands, and ingesting
  • A dplyr interface written by Alex Kyllo.

AzureStor 2.0 is now on CRAN, and the others should also be there over the next few weeks. As always, if you run into any problems using the packages, feel free to contact me.
