
Announcing general availability of Apache Hadoop 3.0 on Azure HDInsight


Today we’re announcing the general availability of Apache Hadoop 3.0 on Azure HDInsight. Microsoft Azure is the first cloud provider to offer customers the benefit of the latest innovations in the most popular open source analytics projects, with unmatched scalability, flexibility, and security. With the general availability of Apache Hadoop 3.0 on Azure HDInsight, we are building upon existing capabilities with a number of key enhancements that further improve performance and security, and deepen support for the rich ecosystem of big data analytics applications.

Bringing Apache Hadoop 3.0 and supercharged performance to the cloud

Apache Hadoop 3.0 represents over 5 years of major upgrades contributed by the open source community across key Apache frameworks such as Hive, Spark, and HBase. New features in Hadoop 3.0 provide significant improvements to performance, scalability, and availability, reducing total cost of ownership and accelerating time-to-value.

  • Apache Hive 3.0 – With ACID transactions on by default and several performance improvements, this latest version of Hive enables developers to build “traditional database” applications on massive data lakes. This is particularly important for enterprises who need to build GDPR/privacy compliant big data applications.
  • Hive Warehouse Connector for Apache Spark – With the Hive Warehouse Connector, the Spark and Hive worlds are coming closer together. The new connector moves the integration from the metastore layer to the query engine layer. This enables higher, more reliable performance with predicate pushdown and other functionality.
  • Apache HBase 2.0 and Apache Phoenix 5.0 – Apache HBase 2.0 and Apache Phoenix 5.0 introduce a number of performance, stability, and integration improvements. With HBase 2.0, periodic reorganization of the data in the memstore with in-memory compactions improves performance as data is not flushed or read too often from remote cloud storage. Phoenix 5.0 brings more visibility into queries with query log by introducing a new system table that captures information about queries that are being run against the cluster.
  • Spark IO Cache – IO Cache is a data caching service for Azure HDInsight that improves the performance of Apache Spark jobs. IO Cache also works with Apache TEZ and Apache Hive workloads, which can be run on Apache Spark clusters.

Enhanced enterprise grade security

Enterprise grade security and compliance is a critical requirement for all customers building big data applications that store or process sensitive data in the cloud.

  • Enterprise Security Package (ESP) support for Apache HBase – With the general availability of ESP support for HBase, customers can ensure that users authenticate to their HDInsight HBase clusters using their corporate domain credentials and are subject to rich, fine-grained access policies (authored and managed in Apache Ranger).
  • Bring Your Own Key (BYOK) support for Apache Kafka – Customers can now bring their own encryption keys into the Azure Key Vault and use them to encrypt the Azure Managed Disks storing their Apache Kafka messages. This gives them a high degree of control over the security of their data.

Rich developer tooling

Azure HDInsight offers rich development experiences with various integrated development environment (IDE) extensions, notebooks, and SDKs.

  • SDKs general availability – HDInsight SDKs for .NET, Python, and Java enable developers to easily manage clusters using the language of their choice.
  • VS Code – The HDInsight VSCode extension enables developers to submit Hive batch jobs, interactive Hive queries, and PySpark scripts to HDInsight 4.0 clusters.
  • IntelliJ – The Azure Toolkit for IntelliJ enables Scala and Java developers to program Spark, Scala, and Java projects with built-in templates. Developers can easily perform local run, local debug, open interactive sessions, and submit Scala/Java projects to HDInsight 4.0 Spark clusters directly from the IntelliJ integrated development environment.

Broad application ecosystem

Azure HDInsight supports a vibrant application ecosystem with a variety of popular big data applications available on Azure Marketplace, covering scenarios from interactive analytics to application migration. We are excited to support applications such as:

  • Starburst (Presto) – Presto is an open source, fast, and scalable distributed SQL query engine that allows you to analyze data anywhere within your organization. Architected for the separation of storage and compute, Presto can easily query data in Azure Blob Storage, Azure Data Lake Storage, SQL and NoSQL databases, and other data sources. Learn more and explore Starburst Presto on Azure Marketplace.
  • Kyligence – Kyligence is an enterprise online analytic processing (OLAP) engine for big data, powered by Apache Kylin. Kyligence enables self-service, interactive business analytics on Azure, achieving sub-second query latencies on trillions of records and seamlessly integrating existing Hadoop and BI systems. Learn more and explore Kyligence on Azure Marketplace.
  • WANDisco – WANDisco Fusion de-risks migration to the cloud by ensuring disruption-free data migrations, easy and seamless extensions of Spark and Hadoop deployments, and short or long term hybrid data operations. Learn more and explore WANDisco on Azure Marketplace.
  • Unravel Data – Unravel provides a unified view across your entire data stack, providing actionable recommendations and automation for tuning, troubleshooting, and improving performance. The Unravel Data app uses Azure Resource Manager, allowing customers to connect Unravel to a new or existing HDInsight cluster with one click. Learn more and explore Unravel on Azure Marketplace.
  • Waterline Data – With Waterline Data Catalog and HDInsight, customers can easily discover, organize, and govern their data, all at the global scale of Azure. Learn more and explore Waterline on Azure Marketplace.

Get started now

We look forward to seeing what innovations you will bring to your users and customers with Azure HDInsight. Read the developer guide and follow the quick start guide to learn more about implementing open source analytics pipelines on Azure HDInsight. Stay up-to-date on the latest Azure HDInsight news and exciting features coming in the near future by following us on Twitter (#AzureHDInsight). For questions and feedback, please reach out to AskHDInsight@microsoft.com.

About Azure HDInsight

Azure HDInsight is an enterprise-ready service for open source analytics that enables customers to easily run popular Apache open source frameworks including Apache Hadoop, Spark, Kafka, and others. The service is available in 30 public regions and Azure Government Clouds in the US and Germany. Azure HDInsight powers mission critical applications for a wide range of sectors and use cases including ETL, streaming, and interactive querying.

 


.NET application migration using Azure App Services and Azure Container Services


Designed for developers and solution architects who need to understand how to move business-critical apps to the cloud, this online workshop series gets you hands-on with a proven process for migrating an existing ASP.NET-based application to a container-based application. Join us live for 90 minutes on Wednesdays and Fridays through May 3 to get expert guidance and to get your questions answered.

The optional (but highly recommended) hands-on labs that accompany this series give you experience building a proof of concept (POC) that will deliver a multi-tiered web app solution from a Virtual Machine architecture into Azure, leveraging Azure Platform Services and different Azure container solutions available today. You will also migrate the underlying database from a SQL 2014 Virtual Machine architecture to SQL Azure.

At the end of this series you will have a good understanding of container concepts, Docker architecture and operations, Azure Container Services, Azure Kubernetes Services and SQL Azure PaaS solutioning.

Part 1: Digital App Transformation with Azure

The first session covers the strategic ways to modernize your existing .NET Framework applications. This includes the different choices Azure provides for app modernization, from VM lift & shift to Platform as a Service (PaaS), as well as an overview of the container services and orchestrators Azure natively provides.

Watch on demand

Part 2: Infrastructure as Code using ARM templates

ARM (Azure Resource Manager) templates are Azure’s answer to Infrastructure as Code, and they can do much more than just deploy infrastructure resources. This session will teach you about how Infrastructure as Code enables faster execution, reduces risk, reduces costs, and integrates with DevOps. You’ll learn about why you should use ARM templates for automated deployment and continuous integration, how to find Azure Quickstart Templates on GitHub, and how to author ARM templates with Visual Studio.

Besides learning how ARM templates deploy Azure resources, we take it a step further and walk you through the full process to automate VM configuration as well. After this session you’ll be able to work through the labs we provide, where you will set up your Azure subscription and deploy the source Virtual Machine environment with Visual Studio 2017, deploying the baseline 2-tier application workload we will be using throughout the workshop series.
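To make the idea concrete, here is a minimal ARM template skeleton; the storage account resource and parameter name below are illustrative placeholders rather than anything from the workshop labs:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2019-04-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}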

Watch on demand

Part 3: Azure Database Solutions | SQL Azure

We’ll start by covering SQL IaaS and PaaS options, including how to address security and isolation concerns and how to integrate high availability and disaster recovery. You’ll see an in-depth demo of deploying Azure SQL where we will highlight key features.

Then we’ll dive deep on migration options and highlight database migration tools, so that you’ll be able to complete the accompanying lab where you migrate a SQL VM database to SQL Azure using SQL Management Studio.

April 17, 2019 10 am Pacific / 1 pm Eastern

Register to join live

Part 4: Azure App Services | Azure Web Apps

In this demo-filled session, you’ll learn about key features, including deployment slots, scaling and autoscaling, pricing tiers, integrated backup, and Application Insights, allowing you to understand the core capabilities and strengths of Azure Web Apps. The session concludes with Azure Web Apps for Containers, with sample architecture and deployment life cycle. In the lab for this session you’ll migrate a legacy ASP.NET application to Azure Web Apps with Visual Studio.

April 19, 2019 10 am Pacific / 1 pm Eastern

Register to join live

Part 5: Docker Containers

Docker Containers are the global standard and are natively supported in Azure, offering enterprises an interesting and flexible way to migrate legacy apps for both future-proofing and cost benefits. In this session you’ll see detailed demos of installing Docker for Windows, running common Docker CLI operations, and building a Docker image using both the CLI and Visual Studio 2017. We’ll also teach you important tips for troubleshooting Docker builds. After this session you’ll be able to complete the lab where you will containerize a legacy ASP.NET application with Docker CE for Windows.

April 24, 2019 10 am Pacific / 1 pm Eastern

Register to join live

Part 6: Azure Container Registry | Azure Container Instance

Azure Container Registry is a managed Docker registry service based on the open-source Docker Registry 2.0, which allows you to create and maintain Azure container registries to store and manage your private Docker container images. Azure Container Instance offers the fastest and simplest way to run a container in Azure, without having to provision any virtual machines and without having to adopt a higher-level service. You’ll learn about both ACR and ACI, and how they work closely together. After the session you’ll be able to complete the lab where you will deploy Azure Container Registry, use Azure Container Instance, and run your containerized workload.

April 26, 2019 10 am Pacific / 1 pm Eastern

Register to join live

Part 7: Container orchestration with Azure Container Services and Azure Kubernetes Services

This session provides a deep dive view on working with container orchestration in Azure and covers both Azure Container Services (ACS) and Azure Kubernetes Services (AKS). We’ll cover the similarities, differences, and roadmap of both, as well as walking through several typical container orchestrator tasks. To prepare you for the lab where you will deploy ACS with Kubernetes and deploy AKS, we’ll present detailed demos and provide samples for managing and deploying. You’ll also see a demo of running a Docker Hub image in AKS.

May 1, 2019 10 am Pacific / 1 pm Eastern

Register to join live

Part 8: Managing and monitoring Azure Kubernetes Services

You’ll learn about enabling container scalability in AKS, monitoring AKS, and using the Kubernetes dashboard with AKS. We’ll present lots of samples and detailed demos for running a Container Registry image inside Azure Container Services, scaling AKS, and monitoring AKS in Azure. For the final lab in this workshop series, you will get hands-on experience managing and monitoring AKS.

May 3, 2019 10 am Pacific / 1 pm Eastern

Register to join live

All sessions will be recorded and available for on demand viewing after they are delivered live, and the labs and other materials will be available on GitHub.

Deploying Grafana for production deployments on Azure


This blog is co-authored by Nick Lopez, Technical Advisor at Microsoft.

Grafana is one of the leading open source tools for visualizing time series metrics, and it has quickly become the visualization tool of choice for developers and operations teams monitoring server and application metrics. Grafana dashboards enable operations teams to quickly monitor and react to the performance, availability, and overall health of a service. You can now also use it to monitor Azure services and applications by leveraging the Azure Monitor data source plugin, built by Grafana Labs. This plugin enables you to include all metrics from Azure Monitor and Application Insights in your Grafana dashboards. If you would like to quickly set up and test Grafana with Azure Monitor and Application Insights metrics, we recommend you refer to the Azure Monitor documentation.

Grafana dashboard using Azure Monitor as a data source to display metrics for Contoso dev environment.

 

The Grafana server image in Azure Marketplace provides a great QuickStart deployment experience. The image provisions a virtual machine (VM) with a pre-installed Grafana dashboard server, SQLite database, and the Azure plugin. The default single-VM deployment is great for a proof of concept study and testing, but for monitoring dashboards that track your critical applications and services, it’s essential to think about high availability of your Grafana deployment on Azure. The following is a proposed and proven architecture to set up Grafana for high availability and security on Azure.

Setting up Grafana for production deployments

Grafana high availability deployment architecture on Azure.

Grafana Labs recommends using a separate, highly available, shared MySQL server when setting up Grafana for high availability. Azure Database for MySQL and Azure Database for MariaDB are managed relational database services based on the community editions of the MySQL and MariaDB database engines. The service provides high availability at no additional cost, predictable performance, elastic scalability, automated backups, and enterprise-grade security with secure sockets layer (SSL) support, encryption at rest, advanced threat protection, and VNet service endpoint support. Utilizing a remote configuration database with the Azure Database for MySQL or Azure Database for MariaDB service allows for the horizontal scalability and high availability of Grafana instances required for enterprise production deployments.

Leveraging Bitnami Multi-Tier Grafana templates for production deployments

Bitnami lets you deploy a multi-node, production ready Grafana solution from the Azure Marketplace with just a few clicks. This solution uses several Grafana nodes with a pre-configured load balancer and Azure Database for MariaDB for data storage. The number of nodes can be chosen at deployment time depending on your requirements. Communication between the nodes and the Azure Database for MariaDB service is also encrypted with SSL to ensure security.

A key feature of Bitnami's Grafana solution is that it comes pre-configured to provide a fault-tolerant deployment. Requests are handled by the load balancer, which continuously tests nodes to check if they are alive and automatically reroutes requests if a node fails. Data (including session data) is stored in the Azure Database for MariaDB and not on the individual nodes. This approach improves performance and protects against data loss due to node failure.

For new deployments, you can launch Bitnami Grafana Multi-Tier through the Azure Marketplace!

Configuring existing installations of Grafana to use Azure Database for MySQL service

If you have an existing installation of Grafana that you would like to configure for high availability, the following steps demonstrate configuring a Grafana instance to use an Azure Database for MySQL server as the backend configuration database. In this walkthrough, we use an Ubuntu server with Grafana installed and configure Azure Database for MySQL as the remote database for the Grafana setup.

  1. Create an Azure Database for MySQL server with the General Purpose tier, which is recommended for production deployments. If you are not familiar with database server creation, you can read the QuickStart tutorial to familiarize yourself with the workflow. If you are using the Azure CLI, you can simply set it up using az mysql up.
  2. If you have already installed Grafana on the Ubuntu server, you’ll need to edit the grafana.ini file to add the Azure Database for MySQL parameters. As per the Grafana documentation on the database settings, we will focus on the database parameters noted in the documentation (a sample excerpt is shown after this list). Please note: The username must be in the format user@server due to the server identification method of Azure Database for MySQL. Other formats will cause connections to fail.
  3. Azure Database for MySQL supports SSL connections. For enterprise production deployments, it is recommended to always enforce SSL. Additional information about setting up SSL with Azure Database for MySQL can be found in the Azure Database for MySQL documentation. Most modern installations of Ubuntu will have the necessary Baltimore CyberTrust CA certificate already installed in your /etc/ssl/certs location. If needed, you can download the SSL certificate CA used for Azure Database for MySQL from this location. The SSL mode can be provided in two forms, skip-verify and true. With skip-verify the certificate provided is not validated, but the connection is still encrypted. With true the certificate provided is validated against the Baltimore CA, which is useful for preventing “man in the middle” attacks. Note that in both cases, Grafana expects the certificate authority (CA) path to be provided.
  4. Next, you have the option to store user sessions in the session table in the Azure Database for MySQL server. This is configured in the same grafana.ini under the session section and is beneficial, for instance, in load-balanced environments where you need to maintain sessions for users accessing Grafana. In the provider_config parameter, we need to include the user@server username, password, full server address, and the TLS/SSL mode, which again can be true or skip-verify. Note that this uses the go-sql-driver/mysql driver, for which more documentation is available.
  5. After this is all set, you should be able to start Grafana and verify the status with the commands below:
  • systemctl start grafana-server
  • systemctl status grafana-server
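For illustration, a minimal grafana.ini excerpt covering steps 2 through 4 might look like the following. The server name (mydemoserver), database name, user, and password are placeholders, and the tls parameter on the session connection string is the go-sql-driver DSN option; adjust everything to match your environment:

; [database] section - Azure Database for MySQL as the Grafana configuration database
[database]
type = mysql
host = mydemoserver.mysql.database.azure.com:3306
name = grafana
user = grafanauser@mydemoserver
password = <your-password>
ssl_mode = true
ca_cert_path = /etc/ssl/certs/BaltimoreCyberTrustRoot.crt.pem

; [session] section - store user sessions in the same MySQL server
[session]
provider = mysql
provider_config = grafanauser@mydemoserver:<your-password>@tcp(mydemoserver.mysql.database.azure.com:3306)/grafana?tls=true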

If you see any errors or issues, the default path for logging is /var/log/grafana/ where you can confirm what is preventing the startup. The following is a sample error where the username was not provided as user@server but rather just user.

lvl=eror msg="Server shutdown" logger=server reason="Service init failed: Migration failed err: Error 9999: An internal error has occurred. Please retry or report your issues.

Otherwise you should see the service in an Ok status and the initial startup will build all the necessary tables in the Azure DB for MySQL database.

Key takeaways

  • The single VM setup for Grafana is great for quick start, testing and a proof of concept study but it may not be suitable for production deployments.
  • For enterprise production deployments of Grafana, separating the configuration database onto a dedicated server enables high availability and scalability.
  • The Bitnami Grafana Multi-Tier template provides a production-ready template that leverages a scale-out design and built-in security to provision Grafana in a few clicks at no extra cost.
  • Using managed database services like Azure Database for MySQL for production deployments provides built-in high availability, scalability, and enterprise security for the database repository.

Additional resources

Get started with Bitnami Multi-Tier Solutions on Microsoft Azure

Monitor Azure services and applications using Grafana

Monitor your Azure services in Grafana

Setting up Grafana for high availability

Azure Database for MySQL documentation

Acknowledgments

Special thanks to Shau Phang, Diana Putnam, Anitah Cantele, and the Bitnami team for their contributions to the blog post.

QnA Maker updates – April 2019


We are excited to provide several updates for the QnA Maker service. To see previous releases for Conversational AI from Microsoft in March, see this post.

New Bot Framework v4 Template for QnA Maker

The QnA Maker service lets you easily create and manage a knowledge base from your data, including FAQ pages, support URLs, PDFs, and doc files. You can test and publish your knowledge base and then connect it to a bot using a bot framework sample or template. With this update we have simplified the bot creation process by allowing you to easily create a bot from your knowledge base, without the need for any code or settings changes. Find more details on creating a QnA bot on our tutorials page.

After you publish your knowledge base, you can create a bot from the publish page with the Create Bot button. If you have previously created bots, you can click on “View all” to see all the bots that are linked to your current subscription.


This will lead you to a bot creation template in the Azure portal with all your knowledge base details pre-filled. Your KB ID is connected to the template automatically, and your endpoint key is pre-populated and should not be changed. You can choose to change some of your pre-filled settings (location, resource name, etc.). With a single click of a button your bot can now be deployed. Once you have hit create, wait a few minutes until your web app bot is deployed.


Once created, you can now test your bot by opening your bot resource and clicking on “Test in Web Chat”. At this point you can chat with your bot and see answers show up from your knowledge base.


QnA Maker supports extraction from SharePoint files

Add secured SharePoint data sources to your knowledge base to enrich the knowledge base with questions and answers that may be secured with Active Directory. You can now easily collaborate with others in your organization on source data and connect it to your QnA Maker knowledge base.

You can add all currently supported QnA Maker file types from a SharePoint location, instead of loading offline files. From your SharePoint file select the appropriate file’s URL and add that as a source to your knowledge base in the settings page, or during creation.


The first time you add a SharePoint secured file you will need to explicitly authenticate your Active Directory account and grant permission to QnA Maker through your Active Directory manager.

Find more details on adding SharePoint sources in QnA Maker in the "Add a secured SharePoint data source to your knowledge base" documentation.

QnA Maker help bot

We are also excited to announce that we recently added the QnA Maker help bot to our QnA Maker page. If you have any questions on QnA Maker, you can now ask the help bot, which is pinned in the left corner of the QnA Maker homepage. If you’d like to see what is behind this bot, the sample code is available on GitHub.


QnA Maker support in healthcare bot

The Microsoft Health Bot Service is a SaaS solution that empowers Microsoft partners to build and deploy compliant, AI-powered health agents. This allows them to offer their users intelligent, personalized access to health-related information and interactions through a natural conversation experience.

The QnA Maker service is now integrated with the healthcare bot, allowing you to extend the healthcare bot experience by connecting it to your knowledge base (KB) or easily adding a set of chit-chat as a starting point for your healthcare bot's personality.

Find more details on adding QnA Maker model to healthcare bot in "Extend your Healthcare Bot with QnA Maker" documentation.


Get Started

View our quick start guide to start creating your QnA Maker knowledge base, and then publish and deploy it in a bot (more information on our tutorials page here). You can also give your bot a personality using chit-chat and make it more intelligent with time using Active Learning.

And if you have any questions or feedback please go ahead and chat with the QnA Maker help bot!

Psst! Bing Maps has new APIs and features to share at Microsoft Build 2019


Get a first-hand look at the latest tools and new features from the Bing Maps team! We are looking forward to meeting you at Microsoft Build 2019 (May 6th through the 8th, at the Washington State Convention Center in Seattle, Washington).

The Bing Maps booth will be in the Modern Workplace area on the Expo floor and offers a great opportunity to chat with the Bing Maps team to learn more about our mapping and location-based services. You can talk to us about any location-based scenarios you have or dig into how the flexible and feature-rich Bing Maps enterprise portfolio of services can take your fleet management, supply chain, business intelligence or custom solutions to the next level.

Microsoft Build

Additionally, we’ll announce new features and APIs at the event, so come by the booth and learn first-hand how the Bing Maps APIs provide advanced business solutions that go beyond standard mapping services and help you deliver applications with location intelligence and innovative user experiences.

If you are not able to attend Microsoft Build 2019, we will share news and updates on the Bing Maps blog during the conference and post recordings of the Bing Maps sessions on https://www.microsoft.com/en-us/maps.

Azure.Source – Volume 78


Preview | News & updates | Technical content | Azure shows | Events | Customers, partners, and industries

Now in preview

Hybrid storage performance comes to Azure

When it comes to adding a performance tier between compute and file storage, Avere Systems has led the way with its high-performance caching appliance known as the Avere FXT Edge Filer. Last week at NAB, attendees got a first look at the new Azure FXT Edge Filer, now with even more performance, memory, SSD, and support for Azure Blob. Since Microsoft’s acquisition of Avere last March, we’ve been working to provide an exciting combination of performance and efficiency to support hybrid storage architectures with the Avere appliance technology. We are currently previewing the FXT 6600 model at customer sites, with a second FXT 6400 model becoming available at general availability.

Photograph of an Azure FXT Edge Filer

News and updates

Want to evaluate your cloud analytics provider? Here are the three questions to ask.

In February, an independent study by GigaOm compared Azure SQL Data Warehouse, Amazon Redshift, and Google BigQuery using the highly recognized TPC-H benchmark. They found that Azure SQL Data Warehouse is up to 14 times faster and costs 94 percent less than other cloud providers. And today, we are pleased to announce that in GigaOm’s second benchmark report, this time with the equally important TPC-DS benchmark, Azure SQL Data Warehouse is again the industry leader. Not Amazon Redshift. Not Google BigQuery. These results prove that Azure is the best place for all your analytics.

Introducing the App Service Migration Assistant for ASP.NET applications

In June 2018, we released the App Service Migration Assessment Tool. The Assessment Tool was designed to help customers quickly and easily assess whether a site could be moved to Azure App Service by scanning an externally accessible (HTTP) endpoint. Today we’re pleased to announce the release of an updated version, the App Service Migration Assistant! The new version helps customers and partners move sites identified by the assessment tool by quickly and easily migrating ASP.NET sites to App Service. Read this blog to learn more about the tool and begin your migration.

Screenshot of App Service Migration Tool landing page

Expanding Azure IoT certification service to support Azure IoT Edge devices

In December 2018, Microsoft launched the Azure IoT certification service, a web-based test automation workflow to streamline the certification process through self-serve tools. Now we are taking steps to expand the service to also support Azure IoT Edge device certification. An Azure IoT Edge device is composed of three key components: IoT Edge modules, the IoT Edge runtime, and a cloud-based interface. Learn more about these three components in this blog explaining IoT Edge.

Azure Updates

Learn about important Azure product updates, roadmap, and announcements. Subscribe to notifications to stay informed.

Technical content

Smarter, faster, safer: Azure SQL Data Warehouse is simply unmatched

We want to call attention to the exciting news that Azure SQL Data Warehouse has again outperformed other cloud providers in the most recent GigaOm benchmark report. This is the result of relentless innovation and laser-focused execution on providing new features our customers need, all while reducing prices so customers get industry-leading performance at the best possible value. In this blog, we take a closer look at the technical capabilities of these new features and, most importantly, how you can start using them today.

Azure Security Center exposes crypto miner campaign

Azure Security Center discovered a new cryptocurrency mining operation on Azure customer resources. The operation took advantage of an old version of a known open-source CMS with a known RCE vulnerability as the entry point, then used the cron utility for persistence and mined Monero cryptocurrency using a newly compiled binary of the XMRig open-source crypto mining tool. Check out our blog for details.

Screenshot of binary editor showing malicious binary packed with UPX packer

You gotta keep privileges separated

When writing scripts for automation or building out a service, don't run under your own credentials. This creates a single point of failure on you for the service. It's also good practice to separate out concerns between environments. This way even if someone accidentally runs a test command against production, it won't have disastrous results. One recommended approach is to use service principals. An Azure service principal is an identity for use with applications, services, and tools to access Azure resources. Using service principals allows us to assign specific permissions that are limited in scope to precisely what is required so we can minimize the impact if it's compromised! This blog explains how.
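As a rough sketch of that approach (the subscription ID, resource group, and names below are placeholders, not values from the linked post), creating and using a narrowly scoped service principal with the Azure CLI looks something like this:

# Create a service principal limited to the Reader role on a single resource group
az ad sp create-for-rbac \
  --name http://my-automation-sp \
  --role "Reader" \
  --scopes /subscriptions/<subscription-id>/resourceGroups/my-rg

# Sign in as that service principal from a script or pipeline
az login --service-principal -u <appId> -p <password> --tenant <tenant-id>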

How do teams work together on an automated machine learning project?

When it comes to executing a machine learning project in an organization, data scientists, project managers, and business leads need to work together to deploy the best models to meet specific business objectives. A central objective of this step is to identify the key business variables that the analysis needs to predict. We refer to these variables as the model targets, and we use the metrics associated with them to determine the success of the project. In this use case, we look at how a data scientist, project manager, and business lead at a retail grocer can leverage automated machine learning and Azure Machine Learning service to reduce product overstock.

How to Use Azure Pipeline Task and Job Conditions

An Azure Pipeline Job is a grouping of tasks that run sequentially on the same target. In many cases, you will want to execute a task or a job only if a specific condition has been met. Azure Pipeline conditions allow us to define the conditions under which a task or job will execute. In this blog, we will detail a common situation in which pipeline conditions are helpful, the configuration of this condition, and which documentation links offer more information.
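As a simple illustration (this fragment is hypothetical and not taken from the linked post), a step-level condition in an azure-pipelines.yml file might look like this:

# Run the deploy step only when previous steps succeeded and the build is from master
steps:
- script: echo "Deploying..."
  displayName: Deploy only from master
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))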

Moving your database to Azure

In this session we show you how we migrated an on-premises MongoDB database to Azure Cosmos DB and a SQL Server database to an Azure SQL Database Managed Instance. You’ll learn about data preparation decisions, performing the migration, and ensuring your application has zero downtime while switching over to the cloud hosted database providers.

Azure Stack IaaS – part seven of a series

Most apps get delivered by a team. When your team delivers an app through virtual machines (VMs), it is important to coordinate efforts. Born in the cloud to serve teams from all over the world, Azure and Azure Stack have some handy capabilities to help you coordinate VM operations across your team. In this blog, we look at features such as single sign-in, role-based access, and collaborating with people outside your organization.

How to accelerate DevOps with Machine Learning lifecycle management

DevOps is the union of people, processes, and products to enable the continuous delivery of value to end users. DevOps for machine learning is about bringing the lifecycle management of DevOps to Machine Learning. Utilizing Machine Learning, DevOps can easily manage, monitor, and version models while simplifying workflows and the collaboration process. Effectively managing the Machine Learning lifecycle is critical for DevOps’ success. And the first piece of machine learning lifecycle management is building your machine learning pipeline or pipelines. We explain how.

Illustration of a machine learning pipeline

How to stay informed about Azure service issues

Azure Service Health helps you stay informed and take action when Azure service issues like outages and planned maintenance affect you. It provides you with a personalized dashboard that can help you understand issues that may be impacting resources in your Azure subscriptions. For any event, you can get guidance and support, share details with your colleagues, and receive issue updates. We’ve posted a new video series to help you learn how to use Azure Service Health and ensure you stay on top of service issues.

How to stay on top of Azure best practices

Optimizing your cloud workloads can seem like a complex and daunting task. We created Azure Advisor, a personalized guide to Azure best practices, to make it easier to get the most out of Azure.

How Skype modernized its backend infrastructure using Azure Cosmos DB

Founded in 2003, Skype has grown to become one of the world’s premier communication services, making it simple to share experiences with others wherever they are. Since its acquisition by Microsoft in 2011, Skype has grown to more than four billion total users, more than 300 million monthly active users, and more than 40 million concurrent users. In a three-part series, we discuss how Skype used Azure Cosmos DB to solve real-world challenges.

Azure shows

Episode 274 - Reliability Engineering | The Azure Podcast

David Blank-Edelman, a Senior Cloud Advocate at Microsoft, gives us some great insight into what customers should be thinking about when it comes to the reliability of their cloud applications.

Using the new Basic Process in Azure DevOps | DevOps Lab

In this episode, Abel chats with Dan Hellem to walk through the details of the new Basic process in Azure DevOps and learn how it works.

Redis Edge on Azure IoT Edge | Internet of Things Show

RedisEdge from Redis Labs is a purpose-built database for the demanding conditions at the IoT edge. It has the ability to ingest millions of writes per second with <1ms latency, has a 5MB footprint, and is available on ARM32, ARM64, and x64 architectures.

Azure Monitor action groups | Azure Friday

Azure Monitor action groups enable you to define a list of actions to execute when an alert is triggered. In this episode, we demonstrate how to configure a Service Health alert to use an action group.

How to test Azure Functions | Azure Tips & Tricks

In this edition of Azure Tips and Tricks, learn how to test Azure Functions with unit and integration test methods.

Thumbnail from How to test Azure Functions

Management Groups, Policy, and Blueprints in Azure Governance | Microsoft Mechanics – Azure

The latest on governing Azure subscriptions for Cloud Architects or Ops Managers. Satya Vel, from the Azure Governance Team, demonstrates Microsoft's approach to Azure Governance overall, which now includes more granular control of policy across different apps and departments in your organization with management groups. You'll also see the new Azure Blueprint templates that simplify setting up your environment to meet specific compliance requirements such as ISO, as well as easier tracking of policy changes and their impact. We'll show you how you can now apply governance capabilities across your Azure Kubernetes workloads.

Thumbnail from Management Groups, Policy, and Blueprints in Azure Governance

Party with Palermo at the Microsoft MVP Summit | Azure DevOps Podcast

This week Jeffrey Palermo has a special episode for you all! It is recorded live, from the night before the Microsoft MVP Summit, at Jeffrey’s annual “Party with Palermo!” get-together for MVPs.

Episode 6 - AI Forensics and Pharaoh Hounds | AzureABILITY Podcast

AI/Machine Learning pioneer Andre Magni visits the pod to talk computer intelligence: from Microsoft's AI mission (to amplify human ingenuity with intelligent technology) to data-curation gotchas and modelling pitfalls to identifying dead bodies using AI.

Events

Countdown for Microsoft Build: Things to Do Part 1

Get ready to see the awesome sights of Seattle while you're at Microsoft Build this May, including the Museum of Pop Culture and Wings over Washington.

Microsoft at SAP Sapphire NOW 2019: A trusted path to cloud innovation

In a few weeks, more than 22,000 people from around the globe will converge on Orlando, Florida, May 7-9 for the SAP Sapphire NOW and ASUG Annual Conference. Each year, the event brings together thought leaders across industries to find innovative ways to solve common challenges, unlock new opportunities, and take advantage of emerging technologies that are changing the business landscape as we know it. This year, Microsoft has elevated its presence with engaging in-booth experiences and informative sessions that will educate, intrigue, and inspire attendees as they take the next step in their digital transformation journey.

Customers, partners, and industries

Bitnami Apache Airflow Multi-Tier now available in Azure Marketplace

A few months ago, we released a blog post that provided guidance on how to deploy Apache Airflow on Azure. The template in the blog provided a good quick start solution for anyone looking to quickly run and deploy Apache Airflow on Azure in sequential executor mode for testing and proof of concept study.

Leveraging AI and digital twins to transform manufacturing with Sight Machine

Azure has mastered ingesting and storing manufacturing data with services such as Azure IoT Hub and Azure Data Lake, and now our partner Sight Machine has solved the other huge challenge: data variety. Sight Machine on Azure is a leading AI-enabled analytics platform that enables manufacturers to normalize and contextualize plant floor data in real time. The creation of these digital twins allows them to find new insights, transform operations, and unlock new value.

Illustration showing Sight Machine’s Digital Manufacturing Platform

Azure AI does that?

Whether you’re just starting off in tech, building, managing, or deploying apps, gathering and analyzing data, or solving global issues, anyone can benefit from using cloud technology. In this post we’ve gathered five cool examples of innovative artificial intelligence (AI) to showcase how you can be a catalyst for real change.


Azure Front Door gets WAF support, a new Premium plan for Azure Functions & changes to Azure alerts | Azure This Week - A Cloud Guru

This time on Azure This Week, Lars covers Azure Front Door, which gets Web Application Firewall support; Azure Functions, which get a new Premium plan for more serverless action; an overhaul of Azure alerts; and a new series, "Azure Fireside Chats", launching on A Cloud Guru.

Thumbnail from Azure Front Door gets WAF support, a new Premium plan for Azure Functions & changes to Azure alerts

In-Editor Documentation for CMake in Visual Studio


Visual Studio 2019 version 16.1 Preview 1 introduces in-editor documentation for CMake commands, variables, and properties. You can now leverage IntelliSense autocompletion and quick info tooltips when editing a CMakeLists.txt file, which will save you time spent outside of the IDE referencing documentation and make the process less error-prone. If you are just getting started with our native support for CMake, head over to our CMake Support in Visual Studio introductory page. You can use CMake to target multiple platforms from the comfort of a single IDE.

Quick info 

Visual Studio now provides tooltips for CMake commands, variables, and properties based on official CMake documentation. The tooltip appears when hovering over a command, variable, or property name and includes the definition (with optional arguments) and a quick description. Below is the quick info seen when hovering over the add_subdirectory  project command. 

Quick Info for "add_subdirectory" CMake command

IntelliSense completion 

Visual Studio 2019 version 16.1 also improves completion behavior in CMakeLists.txt and provides suggestions for documented CMake commands, variables, and properties. Below are the completion suggestions and tooltips provided when setting the CMAKE_CXX_STANDARD using the set command and CMAKE_CXX_STANDARD variable.  

Completion suggestions and tooltips provided when setting the CMAKE_CXX_STANDARD using the set  command and CMAKE_CXX_STANDARD  variable
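For context, a small CMakeLists.txt that exercises the commands and variables shown above might look like this (the project and subdirectory names are placeholders):

# Illustrative CMakeLists.txt fragment
cmake_minimum_required(VERSION 3.12)
project(SampleApp CXX)

set(CMAKE_CXX_STANDARD 17)           # hover over CMAKE_CXX_STANDARD for quick info
set(CMAKE_CXX_STANDARD_REQUIRED ON)

add_subdirectory(src)                # hover over add_subdirectory for its documentation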

Give us your feedback!

Thank you for taking the time to provide the feedback that we use to shape Visual Studio 2019 into the best developer environment for you. We’d love for you to download Visual Studio 2019 version 16.1 Preview 1 and give it a try. We can be reached via the comments below or via email (visualcpp@microsoft.com). If you encounter other problems with Visual Studio or MSVC or have a suggestion, you can use the Report a Problem tool in Visual Studio or head over to the Visual Studio Developer Community. You can also find us on Twitter (@VisualC). 

The post In-Editor Documentation for CMake in Visual Studio appeared first on C++ Team Blog.

.NET Core Workers in Azure Container Instances


In .NET Core 3.0 we are introducing a new type of application template called Worker Service. This template is intended to give you a starting point for writing long running services in .NET Core. In this walkthrough you’ll learn how to use a Worker with Azure Container Registry and Azure Container Instances to get your Worker running as a microservice in the cloud.

Since the Worker template Glenn blogged about is also available via the dotnet new command line, I can create one on my Mac and edit the code using Visual Studio for Mac or Visual Studio Code (which I’ll be using here to take advantage of the integrated Docker extension).

dotnet new worker

I’ll use the default code from the Worker template. Since it writes to the logs during execution via ILogger, I’ll be able to tell quickly from the logs whether the Worker is running.

public class Worker : BackgroundService
{
    private readonly ILogger<Worker> _logger;

    public Worker(ILogger<Worker> logger)
    {
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);
            await Task.Delay(1000, stoppingToken);
        }
    }
}

Visual Studio Code’s Docker tools are intelligent enough to figure out this is a .NET Core app, and will suggest the correct Docker file via the Command Palette’s Add Docker files to workspace option.
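The generated Dockerfile varies with the SDK version and project name, but for a .NET Core 3.0 worker it looks roughly like this (the base image tags and the assembly name BackgroundWorker.dll are assumptions for illustration):

# Build stage: restore and publish the worker
FROM mcr.microsoft.com/dotnet/core/sdk:3.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: copy the published output and run it
FROM mcr.microsoft.com/dotnet/core/runtime:3.0 AS final
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "BackgroundWorker.dll"]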

By right-clicking the resulting Dockerfile I can build the Worker into a Docker image in one click.

The Build Image option will package my Worker’s code into a Docker container image locally. The second option, ACR Tasks: Build Image, would use Azure Container Registry Tasks to build the image in the cloud rather than on disk. This is helpful for scenarios when the base image is larger than I want to download locally or when I’m building an application on a Windows base image from Linux or Mac. You can learn more about ACR Tasks in the ACR docs. The Azure CLI makes it easy to log in to the Azure Container Registry, which results in my Docker client being authenticated to the Azure Container Registry in my subscription.

az acr login -n BackgroundWorkerImages

This can be done in the VS Code integrated terminal or in the local terminal, as the setting will be persisted across the terminals’ environments. It can’t be done using the cloud shell, since logging in to the Azure Container Registry requires local shell access so local Docker images can be accessed. Before I push the container image into my registry, I need to tag the image with the URI it will have once it has been pushed into my registry. I can easily get the ACR instance URI from the portal.

I’ll copy the URI of the registry’s login server in the portal so I can paste it when I tag the image later.

By selecting the backgroundworker:latest image in Visual Studio Code’s Docker explorer pane, I can select Tag Image.

I’ll be prompted for the tag, and I can easily paste in the URI I copied from the portal.

Finally, I can right-click the image tag I created and select Push, and the image will be pushed into the registry. Once I have a Docker image in the registry, I can use the CLI or tools to deploy it to Azure Container Instances, Kubernetes, or even Azure App Service.
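If you prefer the command line over the VS Code Docker explorer, the equivalent tag-and-push flow looks like this (the login server below is a placeholder derived from the registry name used earlier):

# Tag the local image with the registry's login server, then push it
docker tag backgroundworker:latest backgroundworkerimages.azurecr.io/backgroundworker:latest
docker push backgroundworkerimages.azurecr.io/backgroundworker:latest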

Now that the worker is containerized and stored in the registry, starting an instance of it is one click away.

Once the container instance starts up, I’ll see some logs indicating the worker is executing, but these are just the basic startup logs and not my information-level logs I have in my Worker code.

Since I added Information-level logs during the worker’s execution, the configuration in appsettings.json (or the environment variable for the container instance) will need to be updated to see more verbose logs.

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  }
}

Once the code is re-packaged into an updated Docker image and pushed into the Azure Container Registry, following a simple Restart…

… more details will be visible in the container instance’s logging output.

The Worker template makes it easy to create long-running background workers that you can run for as long as you need in Azure Container Instances. New container instances can be created using the portal or the Azure command line, or you can opt for more advanced scenarios using Azure DevOps or Logic Apps. With the Worker template making it easy to get started building microservices using your favorite ASP.NET Core idioms, and with Azure’s arsenal of container orchestration services, you can get your microservices up and running in minutes.
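For reference, a minimal sketch of creating and inspecting such an instance from the Azure CLI would look something like the following (the resource group, instance name, and credentials are placeholders):

# Create a container instance from the image in the registry
az container create \
  --resource-group my-rg \
  --name backgroundworker \
  --image backgroundworkerimages.azurecr.io/backgroundworker:latest \
  --registry-login-server backgroundworkerimages.azurecr.io \
  --registry-username <acr-username> \
  --registry-password <acr-password>

# Tail the worker's log output
az container logs --resource-group my-rg --name backgroundworker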

The post .NET Core Workers in Azure Container Instances appeared first on ASP.NET Blog.


Your computer is not a black box – Understanding Processes and Ports on Windows by exploring


I did a blog post many years ago reminding folks that The Internet is not a Black Box. Virtually nothing is hidden from you. The same is true for your computer, whether it runs Linux, Mac, or Windows.

Here's something that happened today at lunch. I was testing a local DNS Server (more on this on Thursday) and I started it up...and it didn't work.

In order to test a DNS server on Windows, you can go to the command line and run "nslookup" then use the command "server 1.1.1.1" where 1.1.1.1 is the DNS server you'd like to try out. Go ahead and try it now. Run cmd.exe or powershell.exe and then run "nslookup" and then type any domain name. You should get an IP address.

Given that I was trying to run a DNS Server on localhost:53 (Port 53 is where DNS usually hangs out, just like Port 80 is where Web Servers (HTTP) hang out and 443 is where Secured Web Servers (HTTPS) usually are) I should be able to do this. I'm trying to send DNS requests to localhost:53

C:\Users\scott> nslookup

Default Server: pihole
Address: 192.168.151.6

> server 127.0.0.1
Default Server: localhost
Address: 127.0.0.1

> hanselman.com
Server: localhost
Address: 127.0.0.1

*** localhost can't find hanselman.com: No response from server
> hanselman.com

Weird, that didn't work. Let me try a DNS Server I know works like Google's 8.8.8.8 public DNS

> server 8.8.8.8

Default Server: google-public-dns-a.google.com
Address: 8.8.8.8

> hanselman.com
Server: google-public-dns-a.google.com
Address: 8.8.8.8

Non-authoritative answer:
Name: hanselman.com
Address: 206.72.120.92

Ok, it seems my local DNS isn't listening on port 53. Checking the logs of the Technitium local DNS server shows this:

[2019-04-15 23:26:31 UTC] [0.0.0.0:53] [UDP] System.Net.Sockets.SocketException (10048): Only one usage of each socket address (protocol/network address/port) is normally permitted

at System.Net.Sockets.Socket.UpdateStatusAfterSocketErrorAndThrowException(SocketError error, String callerName)
at System.Net.Sockets.Socket.DoBind(EndPoint endPointSnapshot, SocketAddress socketAddress)
at System.Net.Sockets.Socket.Bind(EndPoint localEP)
at DnsServerCore.DnsServer.Start() in Z:\Technitium\Projects\DnsServer\DnsServerCore\DnsServer.cs:line 1234
[2019-04-15 23:26:31 UTC] [0.0.0.0:53] [TCP] DNS Server was bound successfully.
[2019-04-15 23:26:31 UTC] [[::]:53] [UDP] DNS Server was bound successfully.
[2019-04-15 23:26:31 UTC] [[::]:53] [TCP] DNS Server was bound successfully.

The DNS Server's process is trying to bind to TCP:53 and UDP:53 using IPv4 (expressed as localhost with 0.0.0.0:53) and then TCP:53 and UDP:53 using IPv6 (expressed as localhost using [::]:53) but it seems like the UDP binding to port 53 on IPv4 failed. Weird.

Someone else is listening in on Port 53 localhost via IPv4.

That's weird. How can we find out what ports are open locally?

I can run "netstat" and ask Windows for a list of all TCP/IP connections and the processes that are listening on which ports. I'll also PIPE the results to "clip" which will put it in the clipboard automatically. Then I can look at it in a text editor (or I could pipe it through find or findstr).

You can run netstat --help to get the right arguments. I've asked it to tell me the process IDs and all the details it can.
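The exact flags aren't shown here, but a typical invocation that produces this kind of output would be something like the following (run from an elevated prompt, since -b requires administrator rights):

rem -a all connections and listening ports, -b owning executable, -n numeric addresses, -o owning PID
netstat -abno | clip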

Active Connections

Proto Local Address State PID

TCP 0.0.0.0:53 LISTENING 27456
[dotnet.exe]

UDP 0.0.0.0:53 LISTENING 11128
[svchost.exe]

TCP [::]:53 *:* 27456
[dotnet.exe]

UDP [::]:53 *:* 27456
[dotnet.exe]

Hm, a service is already listening on port 53. I'm running Windows 10, not a Server, so it's odd there's already a DNS listener on port 53.

I wonder what service is it?

I can check the Services tab of the Task Manager and sort by PID. Or I can run "tasklist" and ask directly.

C:\WINDOWS\system32>tasklist /svc /fi "pid eq 11128"


Image Name PID Services
========================= ======== ============================================
svchost.exe 11128 SharedAccess

That's Internet Connection Sharing, and it's used by Docker and other apps for NAT translation and routing. I can shut it down with sc (service control) or with "net stop".

C:\WINDOWS\system32>net stop sharedaccess

The Internet Connection Sharing (ICS) service is stopping.
The Internet Connection Sharing (ICS) service was stopped successfully.

Now I can start my DNS Server again (it's written in .NET Core) and I can see with tcpview.exe that it's listening on all appropriate ports.

TCPView showing everything on Port 53

In conclusion, it's a good reminder to refresh yourself on the basics of IPv4, IPv6, how processes talk to and allocate ports, what Process IDs (PIDs) are, and how they relate. Much of this is taught in university computer science courses, but if you're self-taught or not doing low-level work every day, it's easy to forget.

Virtually nothing on your computer is hidden from you!






Machine Learning powered detections with Kusto query language in Azure Sentinel

Move your data from AWS S3 to Azure Storage using AzCopy

Microsoft Garage Project Maps SDK Brings 3D Maps to Mixed Reality


Say hello to Maps SDK, a Microsoft Garage project! This latest release brings all the cool things about 3D maps to Unity developers. As a map control for Unity, Maps SDK makes it possible to fold Bing Maps’ 3D world data into Unity-based, mixed reality experiences. The control is drag-and-drop and provides an off-the-shelf 3D map and customizable controls, along with the building blocks for creative mixed reality map experiences.

Space Needle

In partnership with Outings, another Garage project team, the Bing Maps team created a sample app that shows off what this 3D map control can really do. Want to see it in action?

Outings

Check out a sample app experience for Outings that re-imagines the travel mobile app in mixed reality, powered by Maps SDK — available now in the Microsoft Store for mixed reality headsets and HoloLens.

Get the full story at the Garage blog.

Rewrite HTTP headers with Azure Application Gateway

Visual Studio Code now available through Particle Workbench


We’re excited to announce that Visual Studio Code is included in the new release of tooling for Particle IoT developers. Developers using the Particle platform can now use Visual Studio Code as their default editor for building IoT apps!

Particle provides a widely-used IoT platform that consists of hardware, software, and connectivity. At their Spectra conference last year, Particle announced Particle Workbench, a professional IoT developer offering that includes Visual Studio Code.

Particle logo

Particle Workbench and Visual Studio Code provide a free, ready to use experience to develop, program, and debug apps on Particle’s IoT platform, as well as Microsoft Azure.

Particle Workbench and Visual Studio Code are available together through a single, downloadable installer, which includes the toolchains and extensions for Particle’s IoT ecosystem, supporting local offline compilation and device programming, or cloud compilation and over-the-air (OTA) device programming. IntelliSense for Particle Device APIs is provided by Visual Studio Code language services and the C/C++ extension. Advanced hardware debugging is available in Visual Studio Code for actions like setting breakpoints and step-through debugging, all pre-configured for Particle hardware. There’s also access to more than 3,000 official and community Particle libraries, enabling more reusability and less typing.

For more information, take a look at Particle’s announcement. And if you’re interested to learn more about how to use Particle with Microsoft Azure, learn how to create the dashboard of your dreams in this post from Paul DeCarlo.

Happy coding!


Announcing Azure Government Secret private preview and expansion of DoD IL5


Azure Government Secret

Enabling government to advance the mission

Today we’re announcing a significant milestone in serving our mission customers from cloud to edge with the initial availability of two new Azure Government Secret regions, now in private preview and pending accreditation. Azure Government Secret delivers comprehensive and mission enabling cloud services to US Federal Civilian, Department of Defense (DoD), Intelligence Community (IC), and US government partners working within Secret enclaves.

In addition, we’ve expanded the scope of all Azure Government regions to enable DoD Impact Level 5 (IL5) data, providing a cost-effective option for L5 workloads with a broad range of available services. With our focus on innovating to meet the needs of our mission-critical customers, we continue to provide more PaaS features and services to the DoD at IL5 than any other cloud provider.

For more than 40 years we have prioritized bringing commercial innovation to the DoD. We also continue to help our customers across the full spectrum of government, including every state, federal cabinet agency, and military branch, modernize their IT to better enable their missions.

Meeting the full spectrum of government data needs


Azure Government Secret now in private preview and pending accreditation

Azure Government Secret delivers comprehensive and mission enabling cloud services built with additional controls to support US agencies and partners with workloads classified by the US government at the Secret level. In addition, we’re continuing our commitment to deliver government workloads across the full range of data classifications.

Developed using the same foundational principles and architecture as Azure commercial cloud, the Azure Government Secret regions are built to maintain the security and integrity of classified workloads while enabling fast access to sensitive, mission-critical information. These dedicated datacenter regions are built with additional controls to meet the regulatory and compliance requirements for DoD Impact Level 6 (IL6) and Director of National Intelligence (DNI) Intelligence Community Directive (ICD 503) accreditation.

Azure Government Secret includes two separate Azure regions in the US located over 500 miles apart, providing geographic resilience in disaster recovery (DR) scenarios and faster access to services across the country. In addition, Azure Government Secret operates on secure, native connections to classified networks, with options for ExpressRoute and ExpressRoute Direct to provide private, resilient, high-bandwidth connectivity.

These new regions operated by cleared US citizens are built for IaaS, PaaS, SaaS, and Marketplace solutions, bringing the strength of commercial innovation to the classified space. These secure regions will deliver an experience that’s consistent with Azure Government, designed for ease of procurement and alignment with existing resellers and programs.

“Azure Government Secret will enable us to take applications in legacy IT environments and move them onto a scalable, high-performance platform. This will be a great opportunity to modernize services, making them more efficient and effective for our defense customers.”

Keith Johnson, Chief Technology Officer for the Defense and Intelligence Groups, Leidos

“Microsoft has edge capabilities available now and planned for Azure Government Secret that are just game changers.”

Kim Aftergood, Managing Director, Accenture Federal Services

For more information on the private preview program, Azure Government customers can reach out to their sales representative. Azure Government Secret is available to agencies and their partners with authorized access to a connected US Government classified network.

DoD IL5 scope expands to cover all Azure Government regions

Based on mission owner feedback and evolving security capabilities, Microsoft has partnered with the DoD to expand the IL5 Provisional Authorization (PA) granted by the DoD to all Azure Government regions. This expanded coverage provides customers with more PaaS features and services at IL5 than any other cloud provider.

This expanded range of PaaS services means mission owners can leverage managed services to be more productive. For example, development teams can use Azure App Service to quickly create cloud apps using a fully managed platform, or Azure SQL Database for a fully managed relational cloud database service that provides the broadest SQL Server engine compatibility.

In addition, mission owners will benefit from decreased latency, expanded geo-redundancy, and additional options for DR and budget optimization. Today, more than 25 services are available across all Azure Government regions at IL5, and these new systems will accelerate access to new IL5 services as they become available in Azure Government.

Customers should note, when supporting IL5 workloads on Azure Government, that isolation requirements can be met in different ways. The isolation guidelines for IL5 workloads documentation page addresses configurations and settings for the isolation required to support IL5 data.

Ensuring compliance requirements are met, audited, and enforced

In addition to rapidly releasing services for the full spectrum of government data, we’re continuing to develop programs to help customers ensure security and compliance requirements are met, audited, and enforced. We recently launched Azure Blueprints, which integrates with Azure Policy to help teams manage and enforce governance for specific compliance outcomes.

Azure Blueprints is a free service that helps customers deploy and update cloud environments in a repeatable manner using composable artifacts such as policies, deployment templates, and role-based access controls. This service is built to help customers set up governed Azure environments and can scale to support production implementations for large-scale migrations. Look for new blueprint services for Azure Government supporting FedRAMP and DoD SRG coming soon.

Helping mission customers unlock the opportunities ahead

With the initial availability of two new Azure Government Secret regions, now in private preview and pending accreditation, the expansion of DoD IL5 coverage to all Azure Government regions, and the extended Azure Blueprints program, we’re continuing our investments in innovation, security, and compliance to help customers across the full spectrum of government.

Microsoft enables the digital transformation of government by offering effective, modern, enterprise-class cloud capabilities. We are dedicated to helping our government customers accomplish critical missions with innovative and trusted cloud, productivity, and mobility solutions. We support nearly 10 million US government cloud professionals across more than 7,000 government agencies and remain committed to delivering the highest level of security and compliance necessary to meet their unique needs. 


Azure Container Registry now supports Singularity Image Format containers


Azure and Sylabs announced today a new collaboration which enables Singularity container images to be stored in registries supporting the Open Container Initiative (OCI) Distribution Specification. Singularity version 3.0 defines a new secure Singularity Image Format (SIF).

Azure Container Registry supports storing Helm, CNAB, and other cloud native artifacts in OCI distribution-based registries by working with the OCI Registry as Storage (ORAS) project, a common library that enables various artifact types to be stored. Leveraging that same library, Singularity Image Format container images can now be stored in Azure Container Registry and other OCI distribution-based registries.

“Compliance with standards emerging from the Open Containers Initiative (OCI) has been a matter of emphasis in some of our most-recent releases of Singularity,” stated Singularity founder and Sylabs CEO Gregory Kurtzer. “In fact, Singularity is compliant with both the image and runtime specifications championed by the OCI. To really drive adoption of these standards however, the matter of distributing containers also needs to be addressed. Fortunately, ORAS addresses this significant gap, and significantly lowers the barrier to widespread enterprise adoption. We are delighted to be collaborating on an ongoing basis with Azure to ensure that Singularity is ‘ORAS aware’. Through our initial efforts, SIF container images can now be stored and retrieved in Azure Container Registry as well as other OCI distribution-based registries. For those seeking to leverage standards-compliant containers in Azure Container Registry, support for ORAS via Singularity represents a significant advancement.”

Sylabs and the Singularity community have always been focused on interoperability and this new integration extends the concept to create a broader solution-set for the container community. For customers that already use Azure Container Registry or other OCI distribution-based registries, this new collaboration will allow for an integrated path towards adopting SIF containers in their workflows.

The work done in collaboration with Sylabs enables customers using Singularity to leverage their investments in Azure Container Registry and other OCI-compliant registries, without having to run and maintain another SIF distribution library.

Learn more by visiting “Using OCI Compliant Registries as Artifact Registries” on GitHub.

Microsoft driving standards for the token economy with the Token Taxonomy Framework


Today’s announcement of the Token Taxonomy Initiative (TTI) is a milestone in the maturity of the blockchain industry. 

The initiative brings together some of the most important blockchain platforms from the Ethereum ecosystem, Hyperledger and IBM, Intel, R3, and Digital Asset in a joint effort to establish a common taxonomy for tokens. Also joining are other standards bodies like FINRA, enterprises like J.P. Morgan, Banco Santander, and ING and companies pushing the boundaries in blockchain like ConsenSys, Clearmatics, Komgo, Web3 Labs, and others.

Over the past year, the Azure Blockchain engineering team has been working to understand the breadth of token use cases and found that a lack of industry standards was driving confusion amongst our enterprise customers and partners. We started building the Token Taxonomy Framework to help address this confusion, establish a base line understanding, and a path forward for our customers and partners to begin exploring use of tokens. We quickly realized that our efforts would be much more effective if we didn’t work in isolation, so we chose to contribute the framework and partner with our counterparts across the industry to expand the TTF and seed the industry with a common standard. As the Principal Architect for Azure Blockchain and an EEA Board Member, I will represent Microsoft in the release of the 1.0 framework and will act as the chair of the TTI, collaborating with all participants to ensure that the outcome establishes a foundation to rapidly accelerate the token economy.

Taxonomy

A core principle driving this initiative is platform neutrality, which will ensure the standards we outline are agnostic to any company and empower the industry to innovate openly. This workgroup brings together a diverse set of thought leaders from across the blockchain community, including public cloud platforms, blockchain start-ups, and early adopters across industries. Each of us recognizes both the power of the token economy and the challenges facing businesses looking to innovate in this nascent space. We hope that our initial work seeding the Token Taxonomy Initiative will provide a starting point for the community to build upon in the coming months by providing:

  • A definition of tokens and their use cases across industries.
  • A common set of concepts and terms that can be used by business, technical, and regulatory participants so they can speak the same language.
  • A composition framework for defining and building tokens.
  • A Token Classification Hierarchy (TCH) that is simple to understand.
  • Tooling metadata, using the TTF syntax, to generate visual representations of classifications and modeling tools for viewing and creating token definitions mapped to the taxonomy, eventually linking with implementations for specific platforms.
  • A sandbox environment for legal and regulatory requirement discovery and input.

While not specific to the Ethereum family of technologies, this work does draw from the working group’s experience building with the Ethereum ecosystem. As chair of the TTI, I invite everyone to participate and learn about the taxonomy as it is rolled out in the coming months and look forward to the continued innovation in this space.

New Program for Business Applications ISVs


Note that this blog post was originally posted to: https://www.linkedin.com/pulse/new-program-business-applications-isvs-steven-guggenheimer

 

The growth of data and the evolution of software as a service (SaaS) applications is driving digital feedback loops that are changing how people do their work and drive their businesses. We’ve been through many iterations of both our infrastructure and our approach to building and delivering solutions. The journey for Office from client, to client-server, to SaaS has been well documented as has the evolution from our Windows Server family of offerings to Azure. With the April ’19 release of Power Platform and the updated Dynamics 365 offerings, the business application family is now well into its transformation from client & client-server offerings to a modern SaaS platform (Power Platform) and modern SaaS service (Dynamics 365). This transformation is also well documented.

As we make the Platform and SaaS transformation in this space, the time is right to modernize our independent software vendor (ISV) approach and program to support ISVs in our joint effort to accelerate deployment of PowerApps and Dynamics 365-based solutions. From how we provide better tools in the platform for ISVs and developers, to new program offerings, and alignment with Microsoft’s broader ISV efforts, we are looking holistically at how we can better serve our ISV partners. In talking with ISVs over the last few months, we’ve heard excitement for the future along with acknowledgment that changes are needed to achieve it.

As a platform company we know our success is determined by the health of the ecosystem and as such we wanted to give you a preview of the work we are doing that will start rolling out in July. As a kickstart to complement this blog post, check out our updated ISV website at https://partner.microsoft.com/solutions/business-applications/isv-overview.

New Program Overview

We are launching a new program in July 2019 (with more details at Inspire) for business applications ISVs and Microsoft to jointly serve our common customers. We’ve heard clearly from the ecosystem that there is a willingness to invest with us in sales and marketing efforts to help grow our collective market and that similar programs work well. We will be introducing a new two-tiered revenue sharing program (standard and premium) to deliver ISV priorities for engineering support, co-marketing and co-selling. This program will become the baseline for Dynamics 365 and PowerApps ISVs, and we’ve published an overview which can be found here.

At a high level, all paid Dynamics 365 Customer Engagement and Dynamics 365 Finance and Operations apps and PowerApps will pay a revenue-sharing fee, a portion of which will be used to drive joint sales and marketing efforts. In the future, we expect to add more advanced technical benefits and marketing support. Eligible ISVs that want even more GTM support can request to participate in a premium tier with expanded marketing benefits, including co-sell materials, promotions, PR support, and more, in addition to co-selling support from Microsoft field teams.

ISV Focused Engineering

At the core, the separation of our first party applications from the underlying platform, which serves as the basis for our own SaaS services, is a huge step forward. Now, ISVs can leverage this underlying platform. This effort was the topic of a previous blog post and an exciting key to enabling ISVs to build or extend modern Line of Business SaaS offerings. Whether “connecting” to new or existing solutions, “building” new solutions from the ground up, or “extending” existing Dynamics 365 SaaS offerings, we are enabling new capabilities that are specific to developers/ISVs. Some of the new capabilities include:

  • Self-service quality check for app certification – ISVs can independently verify that the apps they are building for Dynamics 365 Customer Engagement and PowerApps will run seamlessly through the AppSource onboarding process and get certified without delay. Going forward, all new apps will need to be certified and existing apps will need to be recertified periodically to keep them up to date.
  • ISV Studio – ISVs who have published Dynamics 365 Customer Engagement apps or PowerApps to AppSource can enjoy the benefits of a new ISV-centric Studio experience. The ISV Studio is critical in providing SaaS-like experiences to our partner ecosystem and providing ISVs with a consolidated view into how their apps are performing across their installed base.

Knowing how to get started on a platform should be easy, and we are investing in self-serve materials so partners can get started with less friction than today. As we get closer to Build 2019 you will learn about new capabilities that we are lighting up for canvas and model-driven app development, Common Data Service, and Analytics & AI.

App Ingestion & Marketplace

We are simplifying what it takes for ISVs to submit their apps and improving how customers discover them. The first phase is to standardize our app ingestion process by streamlining the submission process from the Cloud Partner Portal (CPP), AppSource, DevCenter, Partner Sales Connect (PSC) and others into a unified Partner Center solution. This will reduce the complexity of submitting an app for ISVs while making it easier for ISVs to collaborate with Microsoft sellers. In addition to changes in how apps are submitted, we are making improvements to our Business Applications marketplace – Microsoft AppSource. Driving all the apps through a single marketplace will enable us to focus our efforts into creating a better experience for users while supporting our partners to succeed. Some of the things we are doing here are improving app discoverability, enhancing app ratings and reviews, consistently applying categories across apps, improving the user experience, and offering new transaction capabilities. Note that we will also be driving a consistent quality bar so that users can be confident in the apps that they get from Microsoft AppSource.

Sales & Marketing

From conversations with several ISVs there is a desire to grow our joint opportunity with customers. The new program has been designed to invest in the success of our ISVs by funding sales and marketing benefits for growing their business. Some of the benefits available (depending on an ISV’s tier) include co-sell ready materials, a Microsoft case study, a 20-30 second commercial, PR support, tele-sales campaign, workshops, marketing tools and more.

For ISVs that connect with multiple product groups at Microsoft, there can be uncertainty in knowing who to contact for assistance on sales opportunities. To simplify this, we are aligning the Business Applications sales and marketing formation and cadence, making it easier to work with us on marketing plans if eligible for this benefit. Supporting this alignment will be dedicated partner development managers and technical support roles who are deep in the Dynamics 365 and PowerApps businesses. Their financial incentive and metrics will be updated to drive a deeper focus on business applications. ISVs focused on industry verticals will also benefit from being connected to our industry teams which will provide a deeper reach into key companies in these industries.

Conclusion

ISVs are a critical component of the Business Applications ecosystem and we value their partnership. Changes to the engagement model take time for each organization to process, so we are announcing them today to give ISVs an opportunity to assess what it means for them and to work with their Microsoft contact on next steps before the July launch. To stay updated on this partner program, sign up for updates here. We will also hold a partner readiness webinar covering the new business model along with the benefits package that will be part of the new program.

Registration links for the partner overview webinar:

As always, we value the partnership and look forward to working together to serve our joint customers and to grow the Business Applications opportunity.

Cheers,

Guggs

How to develop an IoT strategy that yields desired ROI


Manufacturing decision makers aligning processes and data for better ROI.

This article is the second in a four-part series designed to help companies maximize their ROI on the Internet of Things (IoT). In the first post, we discussed how IoT can transform businesses. In this post, we share insights into how to create a successful strategy that yields desired ROI.

The Internet of Things (IoT) holds real promise for fueling business growth and operational efficiency. However, many companies experience challenges applying IoT to their businesses.

In an earlier post, we discussed why and how to get started with IoT, recommending that companies shift their mindset, develop a business case, secure ongoing executive sponsorship and budget, and seize the early-mover advantage. In this post, we’ll cover the six elements of crafting an IoT strategy that will yield ongoing ROI.

1. Have a vision of where you’re headed

IoT leaders benefit from having a vision for where they’re headed and how to commercialize IoT, whether it’s improving the customer experience, redesigning products, expanding a service business, or driving operational excellence. As with any business vision, making it a reality is a long game. IoT leaders and teams will gain insights slowly over a series of projects that stairstep to more significant gains.

“The advice I would give any organization is first and foremost, understand the problem. Fall in love with the problem, not the solution,” says Shane O’Neill, enterprise infrastructure architect and IoT lead for Rolls-Royce, in the Unlocking ROI white paper. Rolls-Royce has used IoT to transform their services business.

That’s sound advice because digital transformation isn’t easy. According to McKinsey, the first 15 or so IoT use cases typically provide modest payback but enable companies to develop the expertise they need to expand IoT’s footprint in their business. For IoT leaders, that can mean cost savings and new revenue gains of 15 percent or more.

2. Define what ROI means to you

It can be difficult to calculate ROI for IoT projects because there are so many variables and because business processes don’t exist in isolation. However, doing so will enable cross-functional IoT teams to win and keep executive sponsorship and demonstrate progress over time.

Here are some of the types of value companies are realizing on their IoT investments, gains that could be part of your ROI rationale:

  • Avoiding unnecessary production costs by minimizing operational downtime and extending the usable lifespan of machinery.
  • Reducing production costs by capitalizing on automated processes, remote monitoring, proactive repair and replacement, and fewer break-fix incidents.
  • Protecting assets by securing costly, multi-million-dollar equipment against diversion and theft.
  • Enabling smarter decision-making with data analytics that include edge insights, process automation, artificial intelligence (AI), and machine learning.
  • Optimizing energy use by identifying sources of waste and prioritizing sustainability initiatives.
  • Revolutionizing product and service development through access to test-and-learn processes, highly accurate customer analytics, brand-new digital-physical products, and subscription-based services.
  • Enabling customizations of products at the point of sale or later in the service lifecycle after customers have gained some experience with them.
  • Getting a competitive advantage, with the ability to execute rapidly based on real-time insights and connected services.

3. Get everyone on the same team

Ideally, IoT is an enterprise-wide collaborative effort that involves senior decision-makers, IT, operations technology, and lines of business. IT and operations can collaborate closely to determine how IoT devices and systems will be connected to each other, digital platforms and networks, and partners. They also need to decide how they will be monitored, managed, and secured.

Getting everyone aligned around the path forward helps companies avoid the temptation of connecting devices and running projects in isolation. Although new IoT platforms empower the business and IT alike to pilot projects, executing a series of independent efforts could invite technology chaos into the organization. Connected devices and IoT systems introduce a myriad of new endpoints that need to be managed appropriately and at scale to avoid creating cyber gaps and introducing the opportunity for data breaches.

Similarly, IoT leaders can communicate a plan for when and how they will serve the different lines of business and win their patience and cooperation. For lines of business, the wait for an IoT project could be years, not months. Help business executives understand the strategic reasons: corporate priorities, better execution, and efficient scaling, among them.

4. Align strategy to real needs

When starting with IoT, it’s tempting to set a big and audacious goal. Yet the reality is that companies will probably have more success if they start with something small, quantifiable, and quickly solvable, and then build on it.

Take, for example, a commercial fleet or logistics company that needs to improve its ability to locate its vehicles. By using IoT and GPS, workers can stage vehicles for maximal usability, stop wasting time searching for vehicles, and optimize the throughput of the fleet.

Over time, this same company could measure more of its data (vehicle speeds, starts and stops, turns, time to load and unload, and fuel use) to test new processes and institutionalize them. Employees could plan truck routes to maximize right turns, saving time and fuel; service vehicles proactively to avoid flat tires, oil loss, and other issues; sequence arrivals to speed loading and unloading; and more. This is how the savings from IoT data, analytics, and reporting add up to big gains.
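As a rough illustration of that idea (the field names, thresholds, and rules below are hypothetical, not from the article), a first IoT project might do nothing more than aggregate per-vehicle telemetry and flag outliers for proactive service:

# Minimal sketch of per-vehicle telemetry aggregation; all names and thresholds are illustrative.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    vehicle_id: str
    hard_stops: int
    fuel_l_per_100km: float

def flag_for_service(readings, fuel_threshold=12.0, hard_stop_threshold=5):
    """Group telemetry by vehicle and flag likely candidates for proactive service."""
    by_vehicle = {}
    for r in readings:
        by_vehicle.setdefault(r.vehicle_id, []).append(r)

    flagged = []
    for vehicle_id, rows in by_vehicle.items():
        avg_fuel = mean(r.fuel_l_per_100km for r in rows)
        total_hard_stops = sum(r.hard_stops for r in rows)
        # Deliberately simple rules; a real deployment would compare against historical baselines.
        if avg_fuel > fuel_threshold or total_hard_stops > hard_stop_threshold:
            flagged.append((vehicle_id, avg_fuel, total_hard_stops))
    return flagged

sample = [
    Reading("truck-17", 3, 13.4),
    Reading("truck-17", 4, 12.9),
    Reading("truck-02", 0, 9.8),
]
print(flag_for_service(sample))  # flags truck-17 (high fuel use, frequent hard stops)

The point isn't these particular rules; it's that each new signal (turns, load times, routes) can be added to the same pipeline and measured against the baseline it creates.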

5. Collect only the data you need

Because of IoT’s ability to optimize processes, it’s tempting to connect everything and pan for gold in the torrents of data that result. However, the reality is that businesses analyze only a fraction of the data they possess.

Companies new to IoT, as well as those that lack a data management practice, often need time to work out which data they really need, and whether they currently have access to it. If they do, the next step is to focus on data collection. Do you have access to the right information, or do you need a strategy to collect something new? And be specific: too much data can create unnecessary noise, making it difficult to understand and isolate what actually improved a process, or why it didn’t.

Conversely, if companies don’t possess that data, they may need to commit to a phase zero data collection effort, connecting devices and waiting an appropriate period of time to create the historical trend and real-time data they will need to truly understand their processes.

6. Consider starting with services to prove the value of IoT

Today, IoT initiatives fall into two buckets. The first is improving operational efficiency. The second, more powerful and emerging trend is evolving into a managed service provider. That’s because IoT data provides value that the business and customers can see, aligning partners around making improvements. In fact, optimizing services is the number one strategic IoT priority for companies today, according to McKinsey.

Rolls-Royce manufactures engines for commercial aircraft, some 13,000 of which are in service around the world. Rolls-Royce has forged deeper connections with its customers and delivered real value by using IoT to help service their customers’ engines. The company uses the Microsoft Azure IoT platform and Azure AI to collect terabytes of data from large aircraft fleets, analyze it for operational anomalies, and plan relevant actions. Rolls-Royce’s services help airlines trim fuel consumption, service or replace parts when needed, and minimize unplanned downtime that could cost millions of dollars across fleets.
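To make "analyze it for operational anomalies" concrete, here is a deliberately simple sketch of the kind of screening involved (a generic z-score check on made-up readings, not Rolls-Royce's actual method):

# Generic anomaly screen over one telemetry channel; the data and threshold are made up.
from statistics import mean, stdev

def zscore_anomalies(values, threshold=2.0):
    """Flag readings more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [(i, v) for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Hypothetical exhaust-gas-temperature readings (deg C) from one engine
egt = [612, 615, 610, 613, 900, 611, 614]
print(zscore_anomalies(egt))  # -> [(4, 900)]

A production pipeline would use per-engine baselines, seasonality, and far richer models, but the shape is the same: establish what normal looks like, then surface what isn't.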

“The Microsoft Azure platform makes it a lot easier for us to deliver on our vision without getting stuck on the individual IT components. We can focus on our end solution and delivering real value to customers rather than on managing the infrastructure,” says Richard Beesley, Senior Enterprise Architect of Data Services for Rolls-Royce.

Using IoT to increase efficiency

Although IoT can have almost limitless applicability to the business, its greatest value is helping companies use data to grow and operate with ruthless efficiency.

Consider this tale of two companies: both have exceptional products that offer comparable new business capabilities. However, the first company has a reactive business model, with limited interaction with customers after the product purchase. It’s still relying on a customer-initiated, break-fix service model.

The second company uses IoT to move further into its customers’ businesses, offering insights into how its products can be used for maximal value, automating manual processes, scheduling servicing proactively, and providing insight into other processes that can be fine-tuned for new business gains.

It’s easy to see which company is best positioned to cross-sell and upsell new products from its position as a trusted partner. It’s easy to see which company will seize share from its competitors and triumph in the digital economy. That’s why now is the time to lead, not lag, with IoT.

Need help? Read this white paper on how to maximize the ROI of IoT.

Download the white paper.

Azure resources to assess risk and compliance


This blog post was co-authored by Lucy Raikova, Senior Program Manager, Azure Global – Financial Services.

It is vital for our customers in the Financial Services Industry (FSI) to deliver innovation and value to their customers while adhering to strict security and regulatory requirements. We at Microsoft Azure know this, and we understand the complexities of trying to innovate quickly and effectively while also ensuring that key regulations and compliance necessities are not overlooked. Azure is uniquely positioned to help our global FSI customers meet their regulatory requirements. Most customers, and likely the entire FSI, need to identify risks and conduct a full risk assessment before committing to any cloud service. This is often mandated by internal risk policies or external regulations, and we agree it is a critical security practice to do the due diligence of assessing a cloud service provider’s (CSP) ability to comply with strict regulations. Doing so validates the competence of a CSP to ensure the privacy, security, access, and continuity of its cloud environment and of downstream customer data in the cloud.

Microsoft provides a rich set of solutions and resources to help you assess and manage your compliance risk as you evaluate moving to the Microsoft cloud. One of the main goals of a general risk assessment is to guarantee that the migration of a system or data to the cloud will not introduce new or unidentified risks into your organization, or at the least, to identify those new risks so they can be appropriately managed to avoid costly fines or loss of revenue due to system downtime. The focus continues to be on ensuring that security, privacy and control, compliance, and transparency requirements are met, and on keeping identified risks below the internal risk appetite threshold. By leveraging these solutions and resources, our most highly regulated FSI customers can efficiently and comprehensively document their compliance and regulatory footprints, while also pushing the boundaries of innovation so that their customers’ experiences continue to evolve and improve.

Self-service resources

There’s a wealth of self-service resources that come with an active Azure subscription. Let’s walk through some of those we most commonly recommend for various functions in Financial Services organizations.

Service Trust Portal

The Service Trust Portal (STP) helps with self-service audits and compliance by providing deeper technical trust, security, privacy, and compliance information. Through the STP, customers can access information such as Microsoft’s security reports, whitepapers (PCI, SEC 17a-4, EBA, etc.), Microsoft’s compliance country checklists, and independent third-party audit reports about Microsoft online services. To access some of the resources on the STP, you must sign in as an authenticated user with a Microsoft cloud service account.

Audit reports

Microsoft is regularly audited and submits self-assessments to third party auditors. We perform in-depth audits of the implementation and effectiveness of security, compliance, and privacy controls. These independent third-party audit reports about Microsoft online services and information about how they can help your organization maintain and track compliance with standards, laws, and regulations such as International Organization for Standardization (ISO), Service Organization Control (SOC), National Institute of Standards and Technology (NIST), Federal Risk and Authorization Management Program (FedRAMP), General Data Protection Regulation (GDPR) are available on the STP.

Compliance Manager

Microsoft also offers a set of integrated solutions that leverage AI to help improve data protection capabilities and overall compliance posture. Compliance Manager enables you to manage your compliance activities in a single dashboard and provides three key capabilities:

  • Risk assessment: The tool helps you track, assign, and verify your organization’s regulatory compliance activities related to Microsoft Cloud services. With a single dashboard, you can see multiple assessments and measure compliance performance for a cloud service against a regulation or standard (e.g., ISO 27001, ISO 27018, FedRAMP, NIST, GDPR).
  • Compliance score: With each assessment you get a compliance score which gives you visibility into your compliance performance.
  • Recommendations: Suggests how to address control gaps, improve data protection, and prioritize your tasks.

Penetration testing

Microsoft regularly conducts penetration testing and vulnerability assessments as required by the Microsoft Security Development Lifecycle (SDL), Payment Card Industry (PCI), FedRAMP, and/or ISO 27001 certification. Microsoft security practices and the ongoing SDL processes enable the service to rapidly mitigate new threats and attacks and protect customer data. Testing standards have also included the Open Web Application Security Project (OWASP) Top 10, CREST-certified testers, fuzz testing, and port scanning of the endpoints. As part of the ongoing risk management program, test results are resolved, and the resolution is validated as part of the compliance program.

Additional Azure resources

Azure also provides your line-of-business, domain risk (e.g., operational risk), IT/Ops, and DevOps teams with additional resources that help you control and monitor your infrastructure on Azure and be more secure and compliant:

Azure Monitor

Azure Monitor maximizes the availability and performance of applications by delivering a comprehensive solution for collecting, analyzing, and acting on telemetry from the cloud and on-premises environments. It helps you understand how your applications are performing and proactively identifies issues affecting them and the resources they depend on.

Azure Security Center

Azure Security Center is a unified infrastructure security management system that strengthens the security posture of your data centers and provides advanced threat protection across your hybrid workloads in the cloud, be it Azure, any other cloud, or on-premises.

Azure Sentinel

Azure Sentinel is a scalable, cloud-native security information and event management (SIEM) platform that uses built-in AI to analyze large volumes of data across the enterprise from all sources in seconds, at a fraction of the cost. It includes built-in connectors for easy onboarding of popular security solutions and allows you to collect data from any source with support for open standard formats like CEF and Syslog.

Azure Service Health

Azure Service Health provides personalized alerts and guidance when Azure service issues affect our customers’ business. It can notify you, help you understand the impact of issues, and keep you updated as the issue resolves. It can also help prepare for planned maintenance and changes that could affect the availability of your resources.

Azure governance

Governance validates that your organization can achieve its goals through an effective and efficient use of IT. It meets this need by creating clarity between business goals and IT projects. With Azure you build and scale your applications quickly while maintaining control.

Azure Blueprints

Azure Blueprints enables quick, repeatable creation of fully governed environments. This service helps you deploy and update cloud environments in a repeatable manner using artifacts such as policies, resource groups, deployment templates, and role-based access controls. It is built to help DevOps teams set up governed Azure environments and can scale to support production implementations for large-scale migrations. Azure recently announced a blueprint for the ISO 27001 compliance standard.

Azure Policy

Azure Policy helps you govern Azure resources by creating, assigning and managing policies. These policies enforce different rules and effects over your resources that help them stay compliant with your corporate standards and service level agreements, and this management and security can be applied at scale.
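To give a sense of what such a policy looks like (the rule below is an illustrative example of ours, not one shipped by Microsoft), a definition boils down to an if/then rule over resource properties. The sketch builds one as a Python dictionary that could then be submitted through the portal, CLI, or management SDK:

# Illustrative custom policy definition: deny resources created outside two approved regions.
# The structure follows the documented Azure Policy definition schema; the specific
# locations, names, and description are hypothetical.
import json

policy_definition = {
    "properties": {
        "displayName": "Allowed locations (example)",
        "policyType": "Custom",
        "mode": "All",
        "description": "Deny any resource created outside approved regions.",
        "policyRule": {
            "if": {
                "not": {
                    "field": "location",
                    "in": ["eastus", "westus2"],
                }
            },
            "then": {"effect": "deny"},
        },
    }
}

print(json.dumps(policy_definition, indent=2))

Assigning a definition like this to a subscription or management group is what actually enforces it at scale, which is also how it pairs with the Azure Blueprints service described above.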

As you can see, Microsoft provides our customers with multiple resources and tools to keep pace with compliance and regulatory requirements across the 54 regions in which Azure operates, and we are just getting started! Our goal is to continue to be the cloud platform with the most comprehensive compliance coverage in the industry.
