
Azure Monitor alerting just got better!


In March 2018, we announced the next generation of alerts in Azure. Since then, we have received overwhelming feedback from you appreciating the new capabilities and requesting the next set of enhancements. Today, I am happy to announce exciting new developments in Azure Monitor alerts.

One Alerts experience

The unified alerts experience in Azure Monitor just got better! We are introducing the unified experience in all major services in Azure, complemented by the One Metrics and One Logs experiences, to provide quick access to these capabilities.

As part of the alerts experience, we’re introducing new ways to visualize and manage alerts: a bird’s eye view of all alerts across subscriptions by severity, a drill-down view into all the alerts, and a detailed view for examining each alert. This is complemented by Smart Groups (preview), which automatically consolidates multiple alerts into a single group using advanced AI techniques. Using these capabilities, you can troubleshoot issues in your environment quickly.


Expanded coverage

We are expanding alerting coverage to more Azure services, including web apps, functions and slots, custom metrics, and standard and web test alerts for Azure Monitor Application Insights.

The alerts experience now also provides a single pane of glass for viewing and enumerating fired alerts across your environment - whether you use Azure Monitor, SCOM, Nagios, or Zabbix to manage it. This experience is backed by a new unified API for enumerating alerts, which you can use programmatically to integrate with the other tools you use.
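
As a rough illustration of what calling that enumeration API might look like, here is a minimal Python sketch against the Alerts Management resource provider. The subscription ID, bearer token, and API version are placeholders/assumptions, not details from this post:

    import requests

    SUBSCRIPTION = "<subscription-id>"   # placeholder
    TOKEN = "<aad-bearer-token>"         # placeholder

    url = (f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
           "/providers/Microsoft.AlertsManagement/alerts")
    resp = requests.get(
        url,
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"api-version": "2018-05-05"},  # assumed API version
    )
    resp.raise_for_status()
    # Each alert carries its key fields under properties.essentials.
    for alert in resp.json().get("value", []):
        essentials = alert.get("properties", {}).get("essentials", {})
        print(alert["name"], essentials.get("severity"), essentials.get("alertState"))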


We’ve also heard from you about the need to monitor complex conditions across your stack. We are enabling alerts across logs generated by Application Insights and Log Analytics. Using this capability, you can create alert rules across logs generated by infrastructure/PaaS services and your app, by defining queries that span multiple workspaces, apps, or both in the alert rule.
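
To make that concrete, a cross-resource query can reference other workspaces and apps directly in the query text. Below is a hedged Python sketch that runs such a query through the Log Analytics query API; the workspace ID, the secondary workspace and app names, and the token are hypothetical:

    import requests

    WORKSPACE_ID = "<primary-workspace-guid>"   # placeholder
    TOKEN = "<aad-bearer-token>"                # placeholder

    # KQL spanning a second workspace and an Application Insights app.
    query = """
    union
        Heartbeat,
        workspace("contoso-secondary").Heartbeat,
        app("contoso-web").requests
    | count
    """

    resp = requests.post(
        f"https://api.loganalytics.io/v1/workspaces/{WORKSPACE_ID}/query",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"query": query},
    )
    resp.raise_for_status()
    print(resp.json()["tables"][0]["rows"])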


Defining rules at scale

One consistent piece of feedback we’ve received from you is to allow defining a single alert rule that targets multiple resources. Today, we are introducing multi-resource metric alert rules to let you do just that! As part of the rule, you can specify a list of resources, a resource group, or a subscription as the target. All of the resources will be individually monitored for the criteria specified in the rule. You can also define an alert rule with “*”, which allows the rule’s scope to grow and shrink automatically as resources are added to or removed from your environment. We currently support this capability for virtual machines only, but will soon expand it to other resource types. Watch this space for more!


Alerting on multiple dimensions? We’ve got that covered too! As part of alerting at scale, a single alert rule can now monitor multiple dimensions. “*” is supported here too, to cover dimension values that are added or removed after the rule is created. A sketch combining both capabilities follows.
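
For illustration only, here is a hedged Python sketch of creating such a rule through the ARM metric alerts API. The resource names, token, metric and dimension names are placeholders, and the payload shape follows our reading of the public metric alert schema rather than anything stated in this post:

    import requests

    SUB = "<subscription-id>"      # placeholder
    RG = "<resource-group>"        # placeholder
    TOKEN = "<aad-bearer-token>"   # placeholder

    rule = {
        "location": "global",
        "properties": {
            "severity": 2,
            "enabled": True,
            # Multi-resource: scope the rule to a whole resource group of VMs.
            "scopes": [f"/subscriptions/{SUB}/resourceGroups/{RG}"],
            "targetResourceType": "Microsoft.Compute/virtualMachines",
            "targetResourceRegion": "eastus",
            "evaluationFrequency": "PT1M",
            "windowSize": "PT5M",
            "criteria": {
                "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria",
                "allOf": [{
                    "name": "DiskReads",
                    # Metric and dimension names are illustrative only.
                    "metricName": "Data Disk Read Bytes/sec",
                    "operator": "GreaterThan",
                    "threshold": 100000000,
                    "timeAggregation": "Average",
                    # Multi-dimension with "*": match current and future LUNs.
                    "dimensions": [
                        {"name": "LUN", "operator": "Include", "values": ["*"]}
                    ],
                }],
            },
        },
    }

    url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
           "/providers/Microsoft.Insights/metricAlerts/disk-reads-at-scale")
    resp = requests.put(url, headers={"Authorization": f"Bearer {TOKEN}"},
                        params={"api-version": "2018-03-01"}, json=rule)
    resp.raise_for_status()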


Faster log alerts now generally available

As part of today’s announcements, we are also announcing the general availability of metric alerts for logs. This capability allows you to convert specific log types in Log Analytics (heartbeat, performance counters, events, and updates) to metrics and create alert rules on those metrics. This brings the latency to generate a log alert down to under 5 minutes.

Alerting on multiple dimensions is also supported for metric alerts for logs, allowing you to monitor multiple computers in your Log Analytics workspace with a single alert rule!


The Azure Monitor alerts team is excited to bring these new enhancements to you! We would love to hear your feedback as you deploy these capabilities in your environment. You can reach us at azurealertsfeedback@microsoft.com. We look forward to hearing from you.


Azure Monitor is providing a unified logs experience


We’re happy to provide a new, unified log search and analytics experience for Azure Monitor logs, as announced earlier this week. Azure Monitor logs is the central analytics platform for monitoring, management, security, application, and all other log types in Azure. The new logs experience, embedded directly in Azure Portal, integrates the capabilities offered so far through different pages and portals. It provides a single place where you can run both basic searches over logged data, as well as advanced queries that provide diagnostics, root-cause analyses or visualizations. Azure Monitor logs is based on the same Log Analytics data and query engine that many of you have already been using.

Another major improvement is the coming integration of the logs experience with Azure resources, starting with Virtual Machines. This means that instead of leaving the VM you work on to launch Azure Monitor, you can access its logs directly through the VM’s monitoring menu, just like you do for alerts and metrics. When opening logs through a specific resource, you are automatically scoped to log records of that resource only (unlike launching logs through Azure Monitor which has the wider scope of the entire selected workspace). We are working across Azure to ensure that this experience is available for every Azure resource. Note that some options like Query Explorer and Alert integration are not yet available through the resource view, and we will be adding them soon.

The new logs experience

The logs experience is designed to help you get the most out of your data – starting with a clear view of your logs and running simple searches, all the way to creating customized advanced queries that you can rely on for your production alerting and dashboarding systems.


Find your way around your logs

The amount of log data collected can be enormous. The new experience offers a set of query examples that can help you get started. As results show up, suggested filters are displayed, generated dynamically from your result set, so you can easily slice and dice the data and zoom in on relevant logs.

Write advanced queries

To get the best insights from your data, you may want to write your own queries. To make query editing easier, the logs experience exposes a full schema view of your data (tables, fields, and data types), provides syntax highlighting and IntelliSense (language auto-completion), and includes a query explorer for accessing your queries, as well as queries provided by Azure to help you get started. If you’re using multiple workspaces, you can use the workspace selector to quickly switch between them, or even query a different workspace in each tab.

Utilize queries in various ways

Once you’ve created a query that provides meaningful data, you may want to keep tracking it over time or react to changes in its results. To accomplish that, you can create Azure Monitor alerts based on log queries, pin queries to Azure dashboards, export them to Power BI, or simply share the query link with a colleague.

We invite you to take part and provide your feedback directly to LAUpgradeFeedback@microsoft.com.

Rich insights for virtual machines from Azure Monitor


At Ignite we announced the public preview of Azure Monitor for VMs, a new capability that provides an in-depth view of VM health, performance trends, and dependencies. You can access Azure Monitor for VMs from the Azure VM resource blade to view details about a single VM, from Azure Monitor to understand compute issues at scale, and from the Resource Group blade to understand whether all the VMs in a common deployment are behaving as you expect.

Azure Monitor for VMs brings together key monitoring data about your Windows and Linux VMs, allowing you to:

  • Monitor the health and availability of VMs, with customizable alert thresholds
  • Troubleshoot guest-level performance issues and understand trends in VM resource utilization
  • Determine whether back-end VM dependencies are connected properly, and which clients of a VM may be affected by any issues the VM is having
  • Discover VM hotspots at scale based on resource utilization, connection metrics, health signals and alerts

Health

Health capabilities in Azure Monitor for VMs include out-of-the-box, configurable VM health criteria powered by the same health modeling services used internally across Microsoft.

Health gives you powerful views into VM availability signals, including how many VMs are in a critical or warning state (or unable to connect to the monitoring service), which VMs are reporting health issues by OS or resource type, and details on health problems with CPU, disk, memory, and network adapters. You can quickly and proactively identify the top issues with VMs, configure near-real-time alerts on health conditions, and link to Knowledge Base articles to remediate issues.


Maps

Azure Monitor for VMs includes dependency maps powered by the existing OMS Service Map solution and its Azure VM extension. Maps deliver a new Azure-centric user experience, with VM resource blade integration, Azure metadata, and dependency maps for Resource Groups and Subscriptions. Maps automatically show you how VMs and processes are interacting, identify surprise dependencies to third party services, and monitor connection failures, live connection counts, network bytes sent and received by process, and service-level latency.

In addition to the visual experience and group-level mapping, you can now query VMConnection events in Log Analytics to alert on spikes in network traffic from selected workloads, query at scale for failed dependencies, and plan Azure migrations from on-premises VMs by analyzing connections over weeks or months.
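
As a hedged example of the kind of query this enables, the Python sketch below looks for machines with failed outbound dependencies over the last hour. The workspace ID and token are placeholders, and the column names follow the Service Map VMConnection schema as we understand it; verify them against your own workspace:

    import requests

    WORKSPACE_ID = "<workspace-guid>"   # placeholder
    TOKEN = "<aad-bearer-token>"        # placeholder

    # Machines with failed outbound dependencies in the last hour.
    query = """
    VMConnection
    | where TimeGenerated > ago(1h)
    | where LinksFailed > 0
    | summarize TotalFailed = sum(LinksFailed) by Computer, RemoteIp
    | order by TotalFailed desc
    """

    resp = requests.post(
        f"https://api.loganalytics.io/v1/workspaces/{WORKSPACE_ID}/query",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"query": query},
    )
    resp.raise_for_status()
    print(resp.json()["tables"][0]["rows"])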

The Service Map solution itself is not deprecated. Over time we expect to migrate users away from the existing OMS-based user experience and into the richer, integrated experiences of Azure Monitor.


Performance

Performance views are powered by Log Analytics and offer powerful aggregation and filtering capabilities, including “Top N” VM sorting and searching across subscriptions and regions, aggregation of VM metrics (e.g., average memory) across all VMs in a resource group across regions, percentiles of performance values over time, and breakdown and selection of VM Scale Set instances.

If you’ve had challenges isolating which of your thousands of VMs are resource constrained, quickly troubleshooting logical disk or memory consumption issues, or getting performance diagnostics where and when you need it while performing administration on your VMs, we’d love to get your feedback on these new Performance experiences.


Virtual Machine Details

Getting started

If you’re running VMs in an on-premises or hybrid environment, or using Azure VM Scale Sets, you can use the Performance and Map capabilities from the “Virtual Machines (preview)” menu of Azure Monitor to find resource constraints and visualize dependencies. The Health capability currently supports Azure VMs and will be extended in the future to support other resource types. For the full list of supported OSes, please see our documentation page.

To get started, go to the resource blade for your VM and click “Insights (preview)” in the Monitoring section. When you click “Try now”, you’ll be prompted to choose a Log Analytics workspace, or we can generate one for you. Once you’re comfortable with the capabilities on a few VMs, you can view VMs at scale in Azure Monitor under “Virtual Machines (preview)” and onboard entire resource groups and subscriptions using Azure Policy or PowerShell.


Check out our full documentation to get more details. We’d love to hear what you like and don’t like about Azure Monitor for VMs, and where you’d like us to take it. Please click “Provide Feedback” in the user experience to share your thoughts with us.

Serial console for Azure VMs now generally available


Earlier this year, we released the public preview of the serial console feature of Azure Virtual Machines (VMs). Today we are announcing the general availability of serial console and we’ve added numerous features and enhanced performance to make serial console an even better tool for systems administrators, IT administrators, DevOps engineers, VM administrators, and systems engineers.

For those new to serial console, you’ll likely recognize this scenario: You’ve made a change to your VM that results in you being unable to connect to your VM through SSH or RDP. In the past, this would have left you pretty helpless. However, serial console enables you to interact with your VM directly through the VM’s serial port – in other words, it is independent of the current network state, or as I like to say, it’s “like plugging a keyboard into your VM.” This means that you can debug an otherwise unreachable VM to fix issues like a broken fstab or a misconfigured network interface, without needing to resort to deleting and recreating your VM.


The latest features for serial console include a subscription-level enable/disable feature, support for magic SysRq keys, support for non-maskable interrupts, accessibility improvements, and performance and stability improvements. Serial console can now be turned off for all VMs in a subscription. The details on this feature are in our documentation for Linux and Windows VMs.

The system request key, also known as the magic SysRq key, allows you to enter key combinations that will be understood by the Linux kernel regardless of its state. Like a SysRq, a non-maskable interrupt (NMI) can forcibly crash your VM, which is useful if your VM is completely frozen. In other words, we’re giving you an additional tool in your toolkit for even more of those nasty debugging situations. More details are available in our documentation for Linux and Windows VMs.

We have also focused on making the serial console more accessible for anyone with visual, hearing or physical impairments. Without the need to use a mouse, you can now tab through serial console to show exactly where you are on screen. Serial console also includes native screen reader support, meaning that your screen reader will be able to tell you exactly what is going on within your serial console.

You can see serial console in action in this episode of Azure Fridays and learn more by visiting our serial console documentation for Linux and Windows VMs.

Azure ISVs expand the possibilities with Azure Stack


Azure Stack features a growing independent software vendor (ISV) community that operates across a broad spectrum of environments, empowering you to create compelling and powerful solutions.

Today, I’d like to highlight some of our ISV partners that address common customer requirements for Azure Stack.

Data protection and disaster recovery

Operators and users of Azure Stack deploying applications and datasets need the ability to quickly recover from data loss and catastrophic failures. With offerings from multiple partners, you can enable data protection and disaster recovery for your applications and data. Supported partners include: Acronis, Actifio, Carbonite, Commvault, Dell EMC, MicroFocus, Quest, Rubrik, Veritas, and ZeroDown. The blog post, Protecting applications and data on Azure Stack, by my colleague Hector Linares, provides an overview of how to protect your applications and data.

Security

Vulnerability and policy compliance scanning for the Azure Stack infrastructure is now a reality thanks to the integration of Azure Stack with Qualys. The Qualys virtual scanner appliance is also coming to the Azure Stack marketplace to enable users to protect their workloads. Backing up your application’s secrets on a Hardware Security Module (HSM) will soon be available thanks to the Azure Stack marketplace solution CipherTrust Cloud Key Manager (CCKM) by Thales, which allows you to interface an HSM with your Key Vault running on Azure Stack. Auditing is a fundamental part of security. Azure Stack now exposes a Syslog client that forwards security logs, alerts, and audits, which you can monitor using the EventTracker SIEM.

Networking

Azure Stack’s Marketplace has best-in-class virtual network appliance solutions that complement and extend Azure Stack’s base networking functionality. Take advantage of high-speed site-to-site VPN gateway solutions from Arista, F5, and Palo Alto Networks that allow you to achieve up to 10x faster throughput than the native VPN Gateway resources in Azure Stack today. You can also use these solutions to connect two virtual networks in the same Azure Stack or across Azure Stack deployments.

Looking for an enterprise-grade Web Application Firewall (WAF) or application load balancer? Barracuda, Check Point, F5, Palo Alto Networks, and Positive Technologies have offerings you should take a look at.

If you’re looking for DDoS or IDS/IPS protection for your application, check out CloudGuard from Check Point, or BIG-IP VE from F5.

Need a local traffic manager or a global traffic manager to balance applications in your datacenter but can’t use Azure Traffic Manager? Take a look at BIG-IP Virtual Edition from F5, which offers rich functionality and policy-based centralized management and can balance applications across multiple Azure Stack deployments.

Migration

Azure Stack has ISV solutions for every stage of application migration, from envisioning/discovery to modernization by leveraging PaaS capabilities. Partners including Carbonite, Cloudbase, Commvault, and Corent Technologies provide ways to migrate various workloads to both Azure and Azure Stack.

Billing and business insights

Service providers need the ability to track usage of end customers and bill them for their consumption, while you need the ability to control costs incurred when operating Azure Stack. Billing partners like CloudAssert and Exivity can extract Azure Stack usage data and provide consumption-based billing and cost management for both service providers and enterprise customers.

Developer platform and tools

When building modern applications, you can leverage the tools and frameworks that you rely on. The following are some of the current available frameworks and tools in Azure Stack:

Bitnami provides application and infrastructure stacks within the Azure Stack Marketplace. Bitnami’s offerings provide Azure users with a catalog of ready-to-deploy apps, from big data (Cassandra) to CMS (WordPress) to high-availability, production-ready clustered solutions (Node.js).

HashiCorp Terraform enables the necessary workflows for provisioning and managing Azure infrastructure at any scale, in the cloud or on premises. Using an Infrastructure as Code methodology, operators create reusable configuration files which are then used for creating or changing Azure or Azure Stack infrastructure.

Pivotal Cloud Foundry enables a continuous delivery platform for modern applications, allowing customers to deploy and manage their Cloud Foundry infrastructure in Azure Stack.

Red Hat OpenShift Container Platform is available now for Azure Stack, providing hybrid enterprise container orchestration and management.

These are just a few highlights of ISV solutions in our growing Azure Stack Marketplace. There are many first- and third-party offers in the Azure Stack Marketplace, including operating systems, applications, open source tools, and solution templates. Check back frequently as we add new Marketplace items all the time.

If you are attending Ignite, drop by the Azure Stack booth and we can direct you to any of our partners!

Manage your SQL Information Protection policy in Azure Security Center


We are pleased to share that your SQL Information Protection policy can now be centrally managed for your entire tenant within Azure Security Center. SQL Information Protection is an advanced security capability for discovering, classifying, labeling, and protecting sensitive data in your Azure data resources. With central policy management you can now define a customized classification and labeling policy that will be applied across all databases on your tenant.

SQL Information Protection

SQL Information Protection (SQL IP) consists of an advanced set of capabilities that form a new information protection paradigm in SQL aimed at protecting the data, not just the database. It provides the following abilities:

  • Discovery and recommendations: The classification engine scans your database and identifies columns containing potentially sensitive data. It then provides you an easy way to review and apply the appropriate classification recommendations via the Azure portal.
  • Labeling: Sensitivity classification labels can be persistently tagged on columns using new classification metadata attributes introduced into the SQL engine. This metadata can then be utilized for advanced sensitivity-based auditing and protection scenarios.
  • Monitoring/Auditing: Sensitivity of the query result set is calculated in real time and used for auditing access to sensitive data.
  • Visibility: The database classification state can be viewed in a detailed dashboard in the Azure portal. Additionally, you can download a report, in Excel format, to be used for compliance and auditing purposes, as well as other needs.

The labeling of sensitive data is done using a classification taxonomy, consisting of Labels and Information Types. Labels are the main classification attributes, used to define the sensitivity level of the stored data. Information Types provide additional granularity into the type of data stored in the database column. In addition, string patterns containing special keywords are used to help discover different classes of sensitive data and associate it with the right Information Type and Label.

Customizing your Information Protection policy

You can now customize the Labels and Information Types used by SQL Information Protection. While the system has a built-in classification taxonomy to start out, many customers have requested the ability to set their own values for sensitivity labels and types of data.

This is a single policy for your entire Azure tenant, managed in Azure Security Center. Managing your own customized policy enables you to do the following (a sketch of the underlying policy API follows the list):

  1. Define a fully customized set of sensitivity labels, according to your organizational requirements.
  2. Rank the sensitivity labels in a linear order, signifying a scale of least to most sensitive.
  3. Add customized Information Types to identify sensitive data types specific to your organization's data environment.
  4. Fully customize the association of Information Types to sensitivity Labels, so that each type of data discovered is automatically assigned the right sensitivity classification.
  5. Add a customized set of discovery keywords and string patterns to each Information Type, used by the data discovery engine to automatically identify sensitive data in your databases.
  6. Rank the Information Types in hierarchical order to definitively determine the association when overlapping data types are discovered.
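
Purely as an illustration, here is a hedged Python sketch of pushing a minimal custom policy through the Security Center information protection policy API. The management group ID, token, GUIDs, and the exact payload shape and API version are our best-guess assumptions from the Security REST surface, not details from this post:

    import requests

    TENANT_MG = "<tenant-management-group-id>"  # placeholder; typically the AAD tenant ID
    TOKEN = "<aad-bearer-token>"                # placeholder

    # A minimal custom policy: one label and one information type.
    policy = {
        "properties": {
            "labels": {
                "2d4a7f9e-0000-0000-0000-000000000001": {  # hypothetical label GUID
                    "displayName": "Highly Confidential",
                    "order": 2,
                    "enabled": True,
                },
            },
            "informationTypes": {
                "3f1b6c8d-0000-0000-0000-000000000002": {  # hypothetical type GUID
                    "displayName": "Employee ID",
                    "order": 1,
                    "enabled": True,
                    "recommendedLabelId": "2d4a7f9e-0000-0000-0000-000000000001",
                    # Discovery keyword pattern used by the classification engine.
                    "keywords": [
                        {"pattern": "%employee%id%", "custom": True, "canBeNumeric": True}
                    ],
                },
            },
        }
    }

    url = ("https://management.azure.com/providers/Microsoft.Management"
           f"/managementGroups/{TENANT_MG}/providers/Microsoft.Security"
           "/informationProtectionPolicies/custom")
    resp = requests.put(url, headers={"Authorization": f"Bearer {TOKEN}"},
                        params={"api-version": "2017-08-01-preview"}, json=policy)
    resp.raise_for_status()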


Once your Information Protection policy is fully defined, it will apply to the classification of data on all Azure SQL databases in your tenant.

Get started today!

You can now use Information Protection central policy management to define your organization's Information Protection policy, across all your Azure SQL databases. This gives you the flexibility and control over how sensitive data is discovered in your systems, and enables you to align the sensitivity labels and classification classes to your organizational needs.

Try it out and let us know what you think!

Try Azure Cosmos DB for 30 days free, no commitment or sign-up


Today we are extending Try Cosmos DB for 30 days free! Try Cosmos DB allows anyone to play with Azure Cosmos DB, with no Azure sign-up required and at no charge, for 30 days, with the ability to renew an unlimited number of times. As many of you know, Azure Cosmos DB is the first globally distributed, massively scalable, multi-model database service. The service is designed to allow customers to elastically and horizontally scale both throughput and storage across any number of geographical regions. It also offers less than 10-ms latencies at the 99th percentile, 99.999 percent high availability, and five well-defined consistency models that let developers make precise tradeoffs between performance, availability, and consistency of data. Azure Cosmos DB is also the first globally distributed database service in the market today to offer comprehensive Service Level Agreements (SLAs) for throughput, latency, availability, and consistency.

With Try Cosmos DB we want to make it easy for developers to evaluate, build, and test their app, do a hands-on lab, follow a tutorial, create a demo, or perform unit testing without incurring any costs or making any commitment. Our goal is to enable any developer to easily experience Azure Cosmos DB, become more comfortable with our database service, and build expertise with our stack at zero cost. With Try Cosmos DB for free, you can go from nothing to a fully running planet-scale Azure Cosmos DB app in less than a minute.

Try Azure Cosmos DB now

Try it out for yourself, it takes less than a minute. Or watch this quick video.


In seconds, you will have your newly created free Azure Cosmos DB account with an invite to open it in the Azure portal and try out our Quick Starts.

Click Open in Azure Portal, which will navigate the browser to the newly created free Azure Cosmos DB account with the Quick Starts page open.


Follow the Quick Starts to get a running app connected to Azure Cosmos DB in under 30 seconds or proceed exploring the service on your own.

When in the portal, you will be reminded how long you have before your account expires. The trial period is 30 days; you can extend it for another 24 hours, click the link to sign up for a free Azure trial if you are new to Azure, or create a new Azure Cosmos DB account if you already have a subscription.

With Try Azure Cosmos DB for free, you can create a container (a collection of documents, a table, or a graph), globally distribute it to up to 3 regions, and use any of the capabilities Azure Cosmos DB provides for 30 days. Once the trial expires, you can always come back and create it all over again.
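
As a rough sketch of what that first container might look like in code, here is a minimal example using the azure-cosmos Python SDK; the endpoint, key, and database/container names are placeholders for the values your trial account gives you:

    from azure.cosmos import CosmosClient, PartitionKey

    # Placeholders: copy the URI and key from your Try Cosmos DB account.
    ENDPOINT = "https://<your-try-account>.documents.azure.com:443/"
    KEY = "<primary-key>"

    client = CosmosClient(ENDPOINT, credential=KEY)

    # Create a database and a container (a collection of documents).
    database = client.create_database_if_not_exists("demo-db")
    container = database.create_container_if_not_exists(
        id="items",
        partition_key=PartitionKey(path="/id"),
    )

    # Write a document and read it back.
    container.upsert_item({"id": "1", "greeting": "hello, planet-scale world"})
    print(container.read_item(item="1", partition_key="1"))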


Play with Azure Cosmos DB and let us know what you think

Azure Cosmos DB is the database of the future! We believe it is the next big thing in the world of massively scalable databases. It makes your data available close to where your users are worldwide. It is a globally distributed, multi-model database service for building planet-scale apps with ease, using the API and data model of your choice. You’ll never know until you try!

If you need any help or have questions or feedback, please reach out to us on the developer forums on Stack Overflow. Stay up-to-date on the latest Azure Cosmos DB news and features by following us on Twitter #CosmosDB, @AzureCosmosDB.

New features and enhancements released in Bing Custom Search V3


We are happy to share the release of additions and enhancements to Bing Custom Search. Bing Custom Search is an easy-to-use, ad-free search solution that enables users to build a search experience and query content on their specific site, or across a hand-picked set of websites or domains. To help users surface the results they want, Bing Custom Search provides a simple web interface where users can control ranking specifics and pin or block responses to suit their needs.

The Bing Custom Search API gives you powerful ranking, a global-scale search index, and document processing with fast, simple setup. 
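
To show how simple the setup is, here is a minimal Python sketch of calling the Custom Search endpoint; the subscription key and custom configuration ID are placeholders you would obtain from the portal:

    import requests

    SUBSCRIPTION_KEY = "<bing-custom-search-key>"   # placeholder
    CUSTOM_CONFIG_ID = "<custom-config-id>"         # placeholder

    resp = requests.get(
        "https://api.cognitive.microsoft.com/bingcustomsearch/v7.0/search",
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
        params={"q": "azure monitor", "customconfig": CUSTOM_CONFIG_ID, "count": 5},
    )
    resp.raise_for_status()
    # Print the name and URL of each web page in the custom result set.
    for page in resp.json().get("webPages", {}).get("value", []):
        print(page["name"], page["url"])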

Bing Custom Search Hosted UI

At Microsoft, we are committed to making AI accessible to everybody, and we are happy to see that a considerable number of websites around the world are powered by Bing Custom Search. We have been prioritizing new features and enhancements based on the feedback we have received since we announced the general availability of Bing Custom Search v1 in October 2017. In May 2018, we released Bing Custom Search v2 with important new features like custom image search, custom autosuggest, and a means for retrieving key insights regarding the usage of the custom search instance. Today, with Bing Custom Search v3, we are releasing even more. Get details about what’s new in Bing Custom Search v3 by reading the release announcement post on the Bing Developers Blog.

For questions please reach out to us via Stack Overflow and Azure Support. We would also love to hear your feedback.


Announcing private preview of Azure VM Image Builder


Today I am excited to announce the private preview of Azure VM Image Builder, a service which allows users to have an image building pipeline in Azure. Creating standardized virtual machine (VM) images allows organizations to migrate to the cloud and ensure consistency in their deployments. Users commonly want VMs to include predefined security and configuration settings as well as application software they own. However, setting up your own image build pipeline would require infrastructure and setup. With Azure VM Image Builder, you can take an ISO or an Azure Marketplace image and start creating your own golden images in a few steps.

How it works

Azure VM Image Builder lets you start with either a Linux-based Azure Marketplace VM or Red Hat Enterprise Linux (RHEL) ISO and begin to add your own customizations. Your customizations can be added in the form of a shell script, and because the VM Image Builder is built on HashiCorp Packer, you can also import your existing Packer shell provisioner scripts. As the last step, you specify where you would like your images hosted, either in the Azure Shared Image Gallery or as an Azure Managed Image. See below for a quick video on how to create a custom image using the VM Image Builder.
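
Since the service is in private preview, its API surface is not public; the Python sketch below is therefore purely illustrative of the general shape of an image template request (source image, shell customizer, Shared Image Gallery distribution). The resource type, API version, field names, and all resource names are assumptions, not documented values:

    import requests

    SUB = "<subscription-id>"      # placeholder
    RG = "<resource-group>"        # placeholder
    TOKEN = "<aad-bearer-token>"   # placeholder

    # Illustrative only: schema and API version are assumptions about the preview.
    template = {
        "location": "westus2",
        "properties": {
            "source": {
                "type": "PlatformImage",
                "publisher": "Canonical",
                "offer": "UbuntuServer",
                "sku": "16.04-LTS",
                "version": "latest",
            },
            "customize": [{
                "type": "Shell",
                "name": "install-app",
                # Reuse an existing Packer shell provisioner script.
                "scriptUri": "https://example.com/scripts/install-app.sh",
            }],
            "distribute": [{
                "type": "SharedImage",
                "galleryImageId": (f"/subscriptions/{SUB}/resourceGroups/{RG}"
                                   "/providers/Microsoft.Compute/galleries/myGallery"
                                   "/images/myGoldenImage"),
                "replicationRegions": ["westus2"],
            }],
        },
    }

    url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
           "/providers/Microsoft.VirtualMachineImages/imageTemplates/golden-ubuntu")
    requests.put(url, headers={"Authorization": f"Bearer {TOKEN}"},
                 params={"api-version": "<preview-api-version>"},
                 json=template).raise_for_status()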

[Video: creating a custom image with the VM Image Builder]

For the private preview, we are supporting these key features:

  • Migrating an existing image customization pipeline to Azure. Import your existing shell scripts or Packer shell provisioner scripts.
  • Migrating your Red Hat subscription to Azure using Red Hat Cloud Access. Automatically create Red Hat Enterprise Linux VMs with your eligible, unused Red Hat subscriptions.
  • Integration with Azure Shared Image Gallery for image management and distribution.
  • Integration with existing CI/CD pipeline. Simplify image customization as an integral part of your application build and release process as shown here:

[Video: integrating VM Image Builder with a CI/CD pipeline]

If you are attending Microsoft Ignite, feel free to join us at breakout session BRK3193 to learn more about this service.

Frequently asked questions

Will Azure VM Image Builder support Windows?

For the private preview, we will support Azure Marketplace Linux images (specifically Ubuntu 16.04 and 18.04). Support for Windows VMs is on our roadmap.

Can I integrate Azure VM Image Builder into my existing image build pipeline?

You can call the VM Image Builder API from your existing tooling.

Is VM Image Builder essentially Packer as a Service?

The VM Image Builder API shares a similar style with Packer manifests and is optimized to support building images for Azure, including support for Packer shell provisioner scripts.

Do you support image lifecycle management in the preview?

For private preview, we will only support creation of images, but not ongoing updates. The ability to update an existing custom image is on our roadmap.

How much does VM Image Builder cost?

For the private preview, Azure VM Image Builder is free. Azure Storage used by images is billed at standard pricing rates.

Sign up today for the private preview

I hope you sign up for the private preview and give us feedback. Register and we will begin sending out more information in October.

New Azure HDInsight management SDK now in public preview


Today, we are excited to announce the preview of the new Azure HDInsight Management SDK. This preview SDK brings support for new languages and can be used to easily manage your HDInsight clusters.

Highlights of this preview release

  • More languages: In addition to .NET, we now offer HDInsight SDKs for Python, Java, and Go as well.
  • Manage HDInsight clusters: The SDK provides several useful operations to manage your HDInsight clusters, including the ability to create clusters, delete clusters, scale clusters, list existing clusters, get cluster details, update cluster tags, execute script actions, and more (see the sketch after this list).
  • Monitor HDInsight clusters: Control monitoring on your HDInsight clusters via the Operations Management Suite (OMS). Use the SDK to enable OMS monitoring on a cluster, disable OMS monitoring on a cluster, and view the status of OMS monitoring on a cluster.
  • Script actions: Use the SDK to execute, delete, list, and view details for script actions on your HDInsight clusters.
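
As a hedged illustration of the Python flavor of the SDK, the sketch below lists clusters and scales one of them. The subscription, resource group, and cluster names are placeholders, and exact method names and signatures vary across azure-mgmt-hdinsight versions, so treat this as a sketch rather than a definitive reference:

    from azure.identity import AzureCliCredential
    from azure.mgmt.hdinsight import HDInsightManagementClient

    credential = AzureCliCredential()
    client = HDInsightManagementClient(credential, "<subscription-id>")  # placeholder

    # List existing clusters in the subscription.
    for cluster in client.clusters.list():
        print(cluster.name, cluster.properties.cluster_state)

    # Scale a cluster's worker nodes (method shape may differ by SDK version).
    client.clusters.begin_resize(
        "<resource-group>",          # placeholder
        "<cluster-name>",            # placeholder
        role_name="workernode",
        parameters={"target_instance_count": 8},
    )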

Get started

You can learn how to quickly get started with the HDInsight Management SDK in the language of your choice in our quickstart documentation.

Reference documentation

We also provide reference documentation that you can use to learn about all available functions in the HDInsight Management SDK.

Try HDInsight now

We hope you will take full advantage of the HDInsight Management SDK Preview and we are excited to see what you will build with Azure HDInsight. Read this developer guide and follow the quick start guide to learn more about implementing these pipelines and architectures on Azure HDInsight. Stay up-to-date on the latest Azure HDInsight news and features by following us on Twitter #HDInsight and @AzureHDInsight. For questions and feedback, reach out to AskHDInsight@microsoft.com.

About HDInsight

Azure HDInsight is an easy, cost-effective, enterprise-grade service for open source analytics that enables customers to easily run popular open source frameworks including Apache Hadoop, Spark, Kafka, and others. The service is available in 27 public regions and Azure Government Clouds in the US and Germany. Azure HDInsight powers mission-critical applications in a wide variety of sectors and enables a wide range of use cases including ETL, streaming, and interactive querying.

Eight new features in Azure Stream Analytics


This week at Microsoft Ignite 2018, we are excited to announce eight new features in Azure Stream Analytics (ASA). These new features include:

    • Support for query extensibility with C# custom code in ASA jobs running on Azure IoT Edge.
    • Custom de-serializers in ASA jobs running on Azure IoT Edge.
    • Live data Testing in Visual Studio.
    • High throughput output to SQL.
    • ML based Anomaly Detection on IoT Edge.
    • Managed Identities for Azure Resources (formerly MSI) based authentication for egress to Azure Data Lake Storage Gen 1.
    • Blob output partitioning by custom date/time formats.
    • User defined custom re-partition count.

The features that are generally available and the ones in public preview will start rolling out imminently. For early access to private preview features, please use our sign-up form.

Also, if you are attending Microsoft Ignite conference this week, please attend our session BRK3199 to learn more about these features and see several of these in action.

General availability features

Parallel write operations to Azure SQL

Azure Stream Analytics now supports high performance and efficient write operations to Azure SQL DB and Azure SQL Data Warehouse to help customers achieve four to five times higher throughput than what was previously possible. To achieve fully parallel topologies, ASA will transition SQL writes from serial to parallel operations while simultaneously allowing for batch size customizations. Read Understand outputs from Azure Stream Analytics for more details.

Configuring high-throughput write operations to SQL

Public previews

Query extensibility with C# UDF on Azure IoT Edge

Azure Stream Analytics offers a SQL-like query language for performing transformations and computations over streams of events. Though there are many powerful built-in functions in the currently supported SQL language, there are instances where a SQL-like language doesn't provide enough flexibility or tooling to tackle complex scenarios.

Developers creating Stream Analytics modules for Azure IoT Edge can now write custom C# functions and invoke them right in the query through User Defined Functions. This enables scenarios like complex math calculations, importing custom ML models using ML.NET, and programming custom data imputation logic. A full-fidelity authoring experience is available in Visual Studio for these functions. You can install the latest version of Azure Stream Analytics tools for Visual Studio.

Find more details about this feature in our documentation.


Definition of the C# UDF in Visual Studio


Calling the C# UDF from ASA Query

Output partitioning to Azure Blob Storage by custom date and time formats

Azure Stream Analytics users can now partition output to Azure Blob storage based on custom date and time formats. This feature greatly improves downstream data-processing workflows by allowing fine-grained control over the blob output, especially for scenarios such as dashboarding and reporting. In addition, partitioning by custom date and time formats enables stronger alignment with downstream Hive-supported formats and conventions when consumed by services such as Azure HDInsight or Azure Databricks. Read Understand outputs from Azure Stream Analytics for more details.
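
For instance, a blob path prefix such as the following, using ASA’s {datetime:...} tokens, would organize output into hourly folders in a Hive-friendly layout; the folder names are illustrative:

    logs/{datetime:yyyy}/{datetime:MM}/{datetime:dd}/{datetime:HH}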

Partition by custom date or time on the Azure portal

The ability to partition output to Azure Blob storage by a custom field or attribute continues to be in private preview.


Setting partition by custom attribute on Azure portal

Live data testing in Visual Studio

Available immediately, Visual Studio tooling for Azure Stream Analytics further enhances the local testing capability to help users test their queries against live data or event streams from cloud sources such as Azure Event Hubs or IoT Hub. This includes full support for Stream Analytics time policies, in a simulated local environment in the Visual Studio IDE.

This significantly shortens development cycles as developers no longer need to start/stop their job to run test cycles. Also, this feature provides a fluent experience for checking the live output data while the query is running. You can install the latest version of Azure Stream Analytics tools for Visual Studio.


Live Data Testing in Visual Studio IDE

User defined custom re-partition count

We are extending our SQL language to optionally let users specify the number of partitions of a stream when performing repartitioning. This enables better performance tuning for scenarios where the partition key can’t be changed due to upstream constraints, where the output has a fixed number of partitions, or where partitioned processing is needed to scale out to a larger processing load. Once repartitioned, each partition is processed independently of the others.

With this new language feature, query developers can simply use the newly introduced keyword INTO after the PARTITION BY clause. For example, the query below reads from the input stream (regardless of whether it is naturally partitioned), repartitions the stream into 10 partitions based on the DeviceID dimension, and flushes the data to the output.

SELECT * INTO [output] FROM [input] PARTITION BY DeviceID INTO 10


Private previews – Sign up for previews


Built-in models for Anomaly Detection on Azure IoT Edge and cloud

By providing ready-to-use ML models right within our SQL-like language, we empower every developer to easily add Anomaly Detection capabilities to their ASA jobs, without requiring them to develop and train their own ML models. This in effect reduces the whole complexity associated with building ML models to a simple single function call.

Currently, this feature is available for private preview in the cloud, and we are happy to announce that these ML functions for built-in Anomaly Detection are also being made available for ASA modules running on the Azure IoT Edge runtime. This will help customers who demand sub-second latencies, or who operate in scenarios where connectivity to the cloud is unreliable or expensive.

In this latest round of enhancements, we have been able to reduce the number of functions from five to two while still detecting all five kinds of anomalies: spikes, dips, slow positive increases, slow negative decreases, and bi-level changes. Also, our tests show a remarkable five to ten times improvement in performance.

Sedgwick, a global provider of technology-enabled risk, benefits, and integrated business solutions, has been engaged with us as an early adopter of this feature.

“Sedgwick has been working directly with Stream Analytics engineering team to explore and operationalize compelling scenarios for Anomaly Detection using built-in functions in the Stream Analytics Query language. We are convinced this feature holds a lot of potential for our current and future scenarios”.

– Krishna Nagalapadi, Software Architect, Sedgwick Labs.

Custom de-serializers in Stream Analytics module on Azure IoT Edge

Today, Azure Stream Analytics supports input events in JSON, CSV, or Avro data formats out of the box. However, millions of IoT devices are often optimized to generate data in other formats that encode structured data more efficiently and extensibly.

Going forward, IoT devices sending data in any format - be it Parquet, Protobuf, XML, or any binary format - can leverage the power of Azure Stream Analytics. Developers can now implement custom de-serializers in C#, which can then be used to de-serialize events received by Azure Stream Analytics.


Configuring input with a custom serialization format

Managed identities for Azure resources (formerly MSI) based authentication for egress to Azure Data Lake Storage Gen 1

Users of Azure Stream Analytics will now be able to operationalize their real-time pipelines with MSI based authentication while writing to Azure Data Lake Storage Gen 1.

Previously, users depended on Azure Active Directory-based authentication for this purpose, which had several limitations. With managed identities, users can now automate their Stream Analytics pipelines through PowerShell, run long-lasting jobs without being periodically interrupted for sign-in renewals, and get a consistent experience across almost all ingress and egress services that are integrated out of the box with Stream Analytics.


Configuring MSI based authentication to Data Lake Storage

Feedback

Engage with us and get a preview of new features by following our Twitter handle @AzureStreaming.

The Azure Stream Analytics team is committed to listening to your feedback and letting the user voice dictate our future investments. Please join the conversation and make your voice heard via our UserVoice page.

Announcing the EA preview release of management group cost APIs


We are excited to preview a set of Azure Resource Manager (ARM) APIs to view cost and usage information in the context of a management group for enterprise customers. Azure customers can use management groups today to place subscriptions into containers for organization within a defined business hierarchy. This allows administrators to manage access, policies, and compliance over those subscriptions. These APIs expand your cost-analysis capabilities by offering a new lens through which you can attribute cost and usage within your organization.

Calling the APIs

The APIs for management group usage and cost are documented in the Azure REST API docs and support the following functions (a sketch of calling them follows the list below):

Operations supported

  1. List usage details by management group for native Azure resources
  2. Get the aggregate cost of a management group
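
As a hedged Python sketch of both operations through the Consumption resource provider, with the management group ID and token as placeholders (the API version and response field names are assumptions worth verifying against the REST docs):

    import requests

    MG_ID = "<management-group-id>"  # placeholder
    TOKEN = "<aad-bearer-token>"     # placeholder
    BASE = ("https://management.azure.com/providers/Microsoft.Management"
            f"/managementGroups/{MG_ID}/providers/Microsoft.Consumption")
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}

    # 1. Usage details for native Azure resources under the management group.
    usage = requests.get(f"{BASE}/usageDetails", headers=HEADERS,
                         params={"api-version": "2018-08-31"})
    usage.raise_for_status()
    for item in usage.json().get("value", []):
        print(item["name"])

    # 2. Aggregate cost of the management group.
    agg = requests.get(f"{BASE}/aggregatedCost", headers=HEADERS,
                       params={"api-version": "2018-08-31"})
    agg.raise_for_status()
    print(agg.json()["properties"]["azureCharges"])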

Preview limitations

The preview release of the management group cost and usage APIs has several limitations, listed below:

  1. Cost and usage data by management group will only be returned if the management group is composed exclusively of Enterprise Agreement subscriptions. Cost views for a management group are not supported if the group contains Web Direct, Pay-As-You-Go, or Cloud Solution Provider subscriptions. This functionality will be offered in a future release.
  2. Cost and usage data for a management group is a point-in-time snapshot of the current management group hierarchy. The cost and usage data returned will not take into account any past changes or reorganization within the management group hierarchy.
  3. Cost and usage data for a management group will only be returned if the underlying charges and data are billed in a single currency. Support for multiple currencies will be available in a future release.


Strengthen your security posture and protect against threats with Azure Security Center


In my recent conversations with customers, they have shared the security challenges they are facing on-premises. These challenges include recruiting and retaining security experts, quickly responding to an increasing number of threats, and ensuring that their security policies are meeting their compliance requirements.

Moving to the cloud can help solve these challenges. Microsoft Azure provides a highly secure foundation for you to host your infrastructure and applications, while also providing built-in security services and unique intelligence to help you quickly protect your workloads and stay ahead of threats. Microsoft’s breadth of security tools spans identity, networking, data, and IoT, and can even help you protect against threats and manage your security posture. One of our integrated, first-party services is Azure Security Center.

Security Center is built into the Azure platform, making it easy for you to start protecting your workloads at scale in just a few steps. Our agent-based approach allows Security Center to continuously monitor and assess your security state across Azure, other clouds, and on-premises. It’s helped customers like Icertis and Stanley Healthcare strengthen and simplify their security monitoring. Security Center gives you instant insight into issues and the flexibility to solve these challenges with integrated first-party or third-party solutions. In just a few clicks, you can have peace of mind knowing Security Center is enabled to help you reduce the complexity involved in security management.

Today we are announcing several capabilities that will help you strengthen your security posture and protect against threats across hybrid environments.

Strengthen your security posture

Improve your overall security with Secure Score: Secure Score gives you visibility into your organizational security posture. It prioritizes all of your recommendations across subscriptions and management groups, guiding you on which vulnerabilities to address first. When you remediate the most pressing issues first, you can see how your actions improve your Secure Score and thus your security posture.


Interact with a new network topology map: Security Center now gives you visibility into the security state of your virtual networks, subnets and nodes through a new network topology map. As you review the components of your network, you can see recommendations to help you quickly respond to detected issues in your network. Also, Security Center continuously analyzes the network security group rules in the workload and presents a graph that contains the possible reachability of every VM in that workload on top of the network topology map.


Define security policies at an organizational level to meet compliance requirements: You can set security policies at an organizational level to ensure all your subscriptions are meeting your compliance requirements. To make things even simpler, you can also set security policies for management groups within your organization. To easily understand if your security policies are meeting your compliance requirements, you can quickly view an organizational compliance score as well as scores for individual subscriptions and management groups and then take action.

Monitor and report on regulatory compliance using the new regulatory compliance dashboard: The Security Center regulatory compliance dashboard helps you monitor the compliance of your cloud environment. It provides you with recommendations to help you meet compliance standards such as CIS, PCI, SOC and ISO.


Customize policies to protect information in Azure data resources: You can now customize and set an information policy to help you discover, classify, label and protect sensitive data in your Azure data resources. Protecting data can help your enterprise meet compliance and privacy requirements as well as control who has access to highly sensitive information. To learn more on data security, visit our documentation.

Assess the security of containers and Docker hosts: You can gain visibility into the security state of your containers running on Linux virtual machines. Specifically, you can gain insight into the virtual machines running Docker, as well as security assessments based on the CIS Docker benchmark.

Protect against evolving threats

Integration with Windows Defender Advanced Threat Protection for servers (WDATP): Security Center can detect a wide variety of threats targeting your infrastructure. With the integration of WDATP, you now get endpoint threat detection (i.e., server EDR) for your Windows servers as part of Security Center. Microsoft’s vast threat intelligence enables WDATP to identify and notify you of attackers’ tools and techniques, so you can understand threats and respond. To uncover more information about a breach, you can explore the details in the interactive Investigation Path within the Security Center blade. To get started, WDATP is automatically enabled for Azure and on-premises Windows servers that have been onboarded to Security Center.

Threat detection for Linux: Security Center’s advanced threat detection capabilities are available across a wide variety of Linux distros to help ensure that whatever operating system your workloads are running on, and wherever your workloads are running, you gain the insights you need to respond to threats quickly. Capabilities include being able to detect suspicious processes, dubious login attempts, and kernel module tampering.

Adaptive network controls: One of the biggest attack surfaces for workloads running in the public cloud are connections to and from the public internet. Security Center can now learn the network connectivity patterns of your Azure workload and provide you with a set of recommendations for your network security groups on how to better configure your network access policies and limit your exposure to attack. These recommendations also use Microsoft’s extensive threat intelligence reports to make sure that known bad actors are not recommended.

Threat detection for Azure Storage blobs and Azure PostgreSQL: In addition to being able to detect threats targeting your virtual machines, Security Center can detect threats targeting data in Azure Storage accounts and Azure Database for PostgreSQL servers. This will help you respond to unusual attempts to access or exploit data and quickly investigate the problem.

Security Center can also detect threats targeting Azure App Services and provide recommendations to protect your applications.

Fileless Attack Detection: Security Center uses a variety of advanced memory forensic techniques to identify malware that persists only in memory and is not detected through traditional means. You can use the rich set of contextual information for alert triage, correlation, analysis and pattern extraction.

Adaptive application controls: Adaptive application controls help you audit and block unwanted applications from running on your virtual machines. To help you respond to suspicious behavior detected in your applications, or deviations from the policies you set, Security Center will now generate an alert in the Security alerts list if there is a violation of your whitelisting policies. You can now also enable adaptive application controls for groups of virtual machines that fall under the “Not recommended” category, to ensure that you whitelist all applications running on your Windows virtual machines in Azure.

Just-in-Time VM Access: With Just-in-Time VM Access, you can limit your exposure to brute force attacks by locking down management ports, so they are only open for a limited time. You can set rules for how users can connect to these ports, and when someone needs to request access. You can now ensure that the rules you set for Just-in-Time VM access will not interfere with any existing configurations you have already set for your network security group.

File Integrity Monitoring (FIM): To help protect your operating system and application software from attack, Security Center continuously monitors the behavior of your Windows files, Windows registry, and Linux files. For Windows files, you can now detect changes through recursion, wildcards, and environment variables. If an abnormal change to the files or malicious behavior is detected, Security Center will alert you so that you can continue to stay in control of your files.

Start using Azure Security Center’s new capabilities today

The following capabilities are generally available: Enterprise-wide security policies, Adaptive application controls, Just-in-Time VM Access for a specific role, adjusting network security group rules in Just-in-Time VM Access, File Integrity Monitoring (FIM), threat detection for Linux, detecting threats on Azure App Services, Fileless Attack Detection, alert confidence score, and integration with Windows Defender Advanced Threat Protection (ATP).

These features are available in public preview: security state of containers, network visibility map, information protection for Azure SQL, threat detection for Azure Storage blobs and Azure PostgreSQL, and Secure Score.

We are offering a limited public preview for some capabilities like our compliance dashboard and adaptive network controls. Please contact us to participate in this early preview.

Learn more about Azure Security Center

If you are attending Ignite 2018 in Orlando this week, we would love to connect with you at our Azure security booth. You can also attend our session on Azure Security Center on Wednesday, September 26th, from 2:15-3:00 PM EST. We look forward to seeing you!

To learn more about how you can implement these Security Center capabilities, visit our documentation.

Step Back – Going Back in C++ Time


Step Back for C++

In the most recent update to Visual Studio 2017 Enterprise Edition, 15.9, we’ve added “Step Back” for C++ developers targeting Windows 10 Anniversary Update (1607) and later. With this feature, you can now return to a previous state while debugging without having to restart the entire process. It’s installed as part of the C++ workload but is set to “off” by default. To enable it, go to Tools -> Options -> IntelliTrace and select the “IntelliTrace snapshots” option. This will enable snapshots for both managed and native code.

Once “Step Back” is enabled, you will see snapshots appear in the Events tab of the Diagnostic Tools Window when you are stepping through C++ code.

Clicking on any event will take you back to its respective snapshot – which is a much more productive way to go back in time if you want to go further back than a few steps. Or, you can simply use the Step Backward button on the debug command bar to go back in time. You can see “Step Back” in action in concurrence with “Step Over” in the gif below.

Under the Hood

So far, we’ve talked about “Step Back” and how you can enable and use it in Visual Studio, but you could have read that on the VS blog. Here on the VC++ blog, I thought it would be interesting to explain how the feature works and what the trade-offs are. After all, no software, debuggers included, is magical!

At the core of “Step Back” is an API in Windows called PssCaptureSnapshot (docs). While the API isn’t very descriptive, there are two key things that it does to a process. Given a target process it will:

  1. Create a ‘snapshot’, which looks suspiciously like a child process of the existing process with no threads running.
  2. Mark the process’s memory, its page tables (Wikipedia), as copy-on-write (Wikipedia). That means that whenever a page is written to, the page is copied.

The important thing about the above is that between the two you basically get a copy of the entire virtual memory space used by the process you snapshotted. From inside that process you can then inspect the state, the memory, of the application as it was at the time the snapshot was created. This is handy for the feature the API was originally designed for: the serialization of a process at the point of failure.
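To make that concrete, here is a minimal C++ sketch of capturing and freeing a snapshot with this API. It’s illustrative only, not the debugger’s actual code; the flag combination and the helper names are assumptions for the example.

#include <windows.h>
#include <processsnapshot.h>

// Capture a copy-on-write snapshot of a process. PSS_CAPTURE_VA_CLONE
// creates the suspended "clone" process whose pages are marked
// copy-on-write; the thread-related flags are an assumed choice here.
bool TakeSnapshot(HANDLE process, HPSS* snapshot)
{
    DWORD rc = PssCaptureSnapshot(
        process,
        PSS_CAPTURE_VA_CLONE | PSS_CAPTURE_THREADS | PSS_CAPTURE_THREAD_CONTEXT,
        CONTEXT_ALL,
        snapshot);
    return rc == ERROR_SUCCESS;
}

// Free a snapshot, releasing its copied pages.
void FreeSnapshot(HANDLE process, HPSS snapshot)
{
    PssFreeSnapshot(process, snapshot);
}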

In VS when debugging C++, we take these snapshots on certain debugger events, namely:

  1. When a breakpoint is hit
  2. When a step event occurs – but only if the time between the stepping action and the previous stepping action is above a certain threshold (around ~300ms). This helps with the case where you are hammering the stepping buttons and just want to step at full speed.

From a practical perspective, that means there will be a snapshot as you step through code. We keep a first-in, first-out buffer of snapshots, freeing the oldest up as more are taken. One downside of this approach is that we aren’t taking snapshots while your app is running, so you can’t hit a breakpoint and then go back to see what happened before the breakpoint was hit.
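As a rough sketch, the stepping heuristic could look something like the following. This is not the VS implementation, just an illustration of the ~300ms debounce described above.

#include <chrono>

// Only snapshot a step event if enough time has passed since the
// previous stepping action, so rapid stepping stays fast. The 300ms
// threshold is the approximate value quoted above.
class SnapshotThrottle
{
    std::chrono::steady_clock::time_point lastStep_{};

public:
    bool ShouldSnapshotOnStep()
    {
        const auto now = std::chrono::steady_clock::now();
        const bool slowEnough = (now - lastStep_) > std::chrono::milliseconds(300);
        lastStep_ = now;
        return slowEnough; // breakpoints, by contrast, always snapshot
    }
};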

Now there is a copy of the process, a snapshot, but how does that get debugged in VS?

Well, this is the ‘easy’ bit: when you hit “Step Back” or activate a snapshot from the Diagnostic Tools window, VS attaches the debugger to that snapshot process. We hide this in the VS UI, so it still looks like you are debugging the process you started with, but in reality you are debugging the snapshot process with all the state from the past. Once you go back to ‘live’ debugging, you will be back in the main process, which is still paused at the location you left it.

Performance of Step Back

One of the first considerations when adding any new feature to the debugger is how it might impact the performance of VS while debugging. Improving the performance of VS is something of a Sisyphean task: however many improvements we make, there are always more to be made, as well as additional features that take back some of those wins. Taking a snapshot takes time, as everything does; in this case it takes time both in the process being debugged and back in VS. There’s no sure way to predict how long it will take, as it’s dependent on the app and how it’s using memory, but while we don’t have a magic 8-ball, we do have data, lots of it…

As of the time of writing, from testing and dogfooding usage in the last 28 days, we’ve seen 29,635,121 snapshots taken across 14,738 machines. From that data set we can see that the 75th percentile for how long it took to take a snapshot is 81ms. You can see a more detailed breakdown in the graph below.


In any case, if you were wondering why “Step Back” isn’t on by default, that graph above is why: “Step Back” simply impacts stepping performance too much to be on by default, all the time, for everyone. Instead, it’s a tool that you should decide to use and, by and large, you’ll likely never notice the impact. If you do, we will turn off “Step Back” and show a ‘gold bar’ notification saying that we’ve done so. The ‘gold bars’ are the notifications that pop up at the top of the editor; the picture below shows the one for when you try “Step Back” without snapshots being enabled.

That’s the CPU usage aspect of performance out of the way; now let’s look at the second aspect, memory.

As you continue to debug your app and the app continues execution, it will no doubt write to memory. This could be setting a value from 1 to 2, as in the example above, or something more complex. In any case, when it comes time to write that change, the OS copies the associated page to a new location, duplicating the data that was changed, and potentially other data on the same page, while keeping the old copy. That new location will continue to be used. This means the old location still has the old value, 1, from the time the snapshot was taken, and the new location has the value 2. As Windows is now copying memory as it’s written, the app will consume more memory. How much depends on the application and what it’s doing: memory consumption is directly proportional to how volatile the app’s memory is. In the trivial app above, each step consumes a tiny bit more memory. But if the app were encoding an image or doing something similarly intense, a lot more memory would be consumed than is typical.

Now, as memory, even virtual memory, is finite, this poses some limitations on “Step Back”; namely, we can’t keep an infinite number of snapshots around. At some point we have to free them and their associated memory. We do that in two ways. Firstly, snapshots are let go on a first-in, first-out basis once a limit of 100 has been reached; that is, you can never step back more than 100 times. That cap is arbitrary, though, a magic number. Secondly, there’s an additional cap enforced based on heuristics: essentially, VS watches memory usage and, in the event of low memory, drops snapshots starting with the oldest, just as if the limit of 100 had been hit.
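A sketch of what such an eviction policy could look like is below, using the documented cap of 100; the low-memory threshold and the exact bookkeeping are assumptions, as the real heuristics are internal to VS.

#include <windows.h>
#include <processsnapshot.h>

#include <deque>

// Keep snapshots in a FIFO ring: enforce the hard cap of 100, and
// under memory pressure drop the oldest snapshots first. The 90%
// memory-load threshold is an assumption for this example.
class SnapshotRing
{
    std::deque<HPSS> snapshots_; // oldest at the front
    static const size_t kMaxSnapshots = 100;

    void DropOldest(HANDLE process)
    {
        PssFreeSnapshot(process, snapshots_.front());
        snapshots_.pop_front();
    }

public:
    void Add(HANDLE process, HPSS snapshot)
    {
        snapshots_.push_back(snapshot);
        while (snapshots_.size() > kMaxSnapshots) // the arbitrary FIFO cap
            DropOldest(process);

        MEMORYSTATUSEX status = {};
        status.dwLength = sizeof(status);
        while (GlobalMemoryStatusEx(&status) &&
               status.dwMemoryLoad > 90 && !snapshots_.empty())
        {
            DropOldest(process);
        }
    }
};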

Conclusion

We’ve covered how you can use “Step Back” and how it works under the hood, so hopefully you are now in a place to make an informed decision on when to use the feature. While this feature is only in the Enterprise edition of Visual Studio, you can always try out the preview channel of Visual Studio Enterprise. I highly recommend you go turn it on; for me personally, it’s saved a whole bunch of time not restarting debug sessions. And when you do use the feature, I’d love to hear your feedback, and as ever, if you have any feedback on the debugger experience in VS, let us know!

You can also reach me by email at andster@microsoft.com or on Twitter at https://twitter.com/andysterland.

Thanks for reading!

Andy Sterland

Program Manager, Visual Studio, Diagnostics Team

Getting started writing Visual Studio extensions

I’m often asked how best to learn to build Visual Studio extensions, so here is what I wish someone had told me before I got started.

Don’t skip the introduction

It’s easy to create a new extensibility project in Visual Studio, but unless you understand the basics of how the extensibility system works, you are setting yourself up for failure.

The best introduction I know of is a session from //build 2016, and it is as relevant today as it was then.



Know the resources

Where do you get more information about the various aspects of the Visual Studio APIs you wish to use? Here are some very helpful websites that are good to study.

Know how to search for help

Writing extensions is a bit of a niche activity so searching for help online doesn’t always return relevant results. However, there are ways we can optimize our search terms to generate better results.

  • Use the precise interface and class names as part of the search term
  • Try adding the words VSIX, VSSDK or Visual Studio to the search terms
  • Search directly on GitHub instead of Google/Bing when possible
  • Ask questions to other extenders on the Gitter.im chatroom

Use open source as a learning tool

You probably have ideas about what you want your extension to do and how it should work. But which APIs should you use, and how do you hook it all up correctly? These are difficult questions, and a lot of people give up when they go unanswered.

The best way I know of is to find extensions on the Marketplace that do similar things or use similar elements to what you want to do. Then find the source code for those extensions, look at what they did and which APIs they used, and go from there.

Additional tools

There is an open source extension for Visual Studio that I can highly recommend; it provides additional features for extension authors. Grab the Extensibility Essentials extension on the Marketplace.

Also, a NuGet package exists containing Roslyn analyzers that will help you write extensions. Add the Microsoft.VisualStudio.SDK.Analyzers package to your extension project.

I hope this will give you a better starting point for writing extensions. If I forgot to mention something, please let me know in the comments.

Mads Kristensen, Senior Program Manager
@mkristensen

Mads Kristensen is a senior program manager on the Visual Studio Extensibility team. He is passionate about extension authoring, and over the years, he’s written some of the most popular ones, with millions of downloads.

How to upgrade extensions to support Visual Studio 2019

Recently, I’ve updated over 30 of my extensions to support Visual Studio 2019 (16.0). To make sure they work, I got my hands on a very early internal build of VS 2019 to test with (working on the Visual Studio team has its benefits). This upgrade process is one of the easiest I’ve ever experienced.

I wanted to share my steps with you to show just how easy it is so you’ll know what to do once Visual Studio 2019 is released.

Updates to .vsixmanifest

We need to make a couple of updates to the .vsixmanifest file. First, we must update the supported VS version range.

<InstallationTarget>

Here’s a range that supports every major and minor version of Visual Studio from 14.0 (2015) through 15.0 (2017), up to but not including version 16.0.

<Installation InstalledByMsi="false"> 
    <InstallationTarget Id="Microsoft.VisualStudio.Pro" Version="[14.0,16.0)" /> 
</Installation>

Simply change the upper bound of the version range from 16.0 to 17.0, like so:

<Installation InstalledByMsi="false"> 
    <InstallationTarget Id="Microsoft.VisualStudio.Pro" Version="[14.0,17.0)" /> 
</Installation>

<Prerequisite>

Next, update the version ranges in the <Prerequisite> elements. Here’s what it looked like before:

<Prerequisites> 
    <Prerequisite Id="Microsoft.VisualStudio.Component.CoreEditor" Version="[15.0,16.0)" DisplayName="Visual Studio core editor" /> 
</Prerequisites>

We keep the same lower bound as before, but in this case we can make the upper bound open ended, like so:

<Prerequisites> 
    <Prerequisite Id="Microsoft.VisualStudio.Component.CoreEditor" Version="[15.0,)" DisplayName="Visual Studio core editor" /> 
</Prerequisites>

This means that the Prerequisite needs version 15.0 or newer.

See the updated .vsixmanifest files for Markdown Editor, Bundler & Minifier, and Image Optimizer.

Next Steps

Nothing. That’s it. You’re done.

Well, there is one thing that may affect your extension. Extensions that autoload a package have to do so in the background, as stated in the blog post Improving the responsiveness of critical scenarios by updating auto load behavior for extensions. You can also check out this walkthrough on how to update your extension to use AsyncPackage if you haven’t already.

What about the references to Microsoft.VisualStudio.Shell and other such assemblies? As always with a new version of Visual Studio, they are automatically redirected to the 16.0 equivalents, and backwards compatibility ensures it will Just Work™. In my experience with the upgrade, they do in fact just work.

I’m going to head back to adding VS 2019 support to the rest of my extensions. I’ve got about 40 left to go.

Mads Kristensen, Senior Program Manager
@mkristensen

Mads Kristensen is a senior program manager on the Visual Studio Extensibility team. He is passionate about extension authoring, and over the years, he’s written some of the most popular ones, with millions of downloads.

Microsoft Tops Four AI Leaderboards Simultaneously

Businesses are continuously trying to be more productive while also better serving their customers and partners, and AI is one of the new technology areas they are looking at to help with this work. My colleague Alysa Taylor recently shared some of the work we are doing to help in this area with our upcoming Dynamics 365 + AI offerings, a new class of business applications that will deliver AI-powered insights out of the box. These solutions help customers make the transition from business intelligence (BI) to artificial intelligence (AI) to address increasingly complex scenarios and derive actionable insights. Many of these new capabilities, which will be shipping this October, are powered by breakthrough innovations from our world-class AI research and development teams, who contribute to our products and participate in the broader research community by publishing their results.

This time last year, I wrote about the Stanford Question Answering Dataset (SQuAD) test for machine reading comprehension (MRC). Since writing that post, Microsoft reached another major milestone, creating a system that could read a document and answer questions as well as a human on the SQuAD 1.1 test. Although such a test differs from real-world usage, we find that the research innovations make our products better for our customers. Today I’d like to share an update on the next wave of innovation in natural language understanding and machine reading comprehension. Microsoft’s AI research and business engineering teams have now taken the top positions in three important industry competitions hosted by Salesforce, the Allen Institute for AI, and Stanford University. Even though the top spots on the leaderboards continuously change, I would like to highlight some of our recent progress.

Microsoft tops Salesforce WikiSQL Challenge

Data stored in relational databases is the “fuel” that sales and marketing professionals tap to inform daily decisions. However, getting value from the data often requires a deep understanding of its structure. An easier approach is to use a natural language interface to query the data. Salesforce published a large crowd-sourced dataset based on Wikipedia, called WikiSQL, for developing and testing such interfaces. Over the last year many research teams have been developing techniques using this dataset, and Salesforce has maintained a leaderboard for this purpose. Earlier this month, Microsoft took the top position on Salesforce’s leaderboard with a new approach called IncSQL. The significant improvement in test execution accuracy (from 81.4% to 87.1%) is due to a fundamentally novel incremental parsing approach combined with the idea of execution-guided decoding, both detailed in the academic articles linked above. This work is the result of collaboration between scientists in Microsoft Research and in the Business Application Group.

WikiSQL Leaderboard

Microsoft tops Allen Institute for AI’s Reasoning Challenge (ARC)

The ARC question answering challenge provides a dataset of 7,787 grade-school-level, multiple-choice, open-domain questions designed to test approaches to question answering. Open domain is a more challenging setting for text understanding since the answer is not explicitly present; models must first retrieve related evidence from large corpora before selecting a choice. This is a more realistic setting for a general-purpose application in this space. The top approach, essential term aware retriever-reader (ET-RR), was developed jointly by our Dynamics 365 + AI research team working with interns from the University of San Diego. The #3 position on the leaderboard is held by a separate research team comprising Sun Yat-Sen University researchers and Microsoft Research Asia. Both results serve as a great reminder of the value of collaboration between academia and industry to solve real-world problems.

AI2 Reasoning Challenge

Microsoft tops new Stanford SQuAD 2.0 Reading Comprehension

In June 2018, SQuAD version 2.0 was released to “encourage the development of reading comprehension systems that know what they don’t know.” Microsoft currently occupies the #1 position on SQuAD 2.0 and three of the top five rankings overall, while simultaneously maintaining the #1 position on SQuAD 1.1. What’s exciting is that multiple positions are occupied by the Microsoft business applications group responsible for Dynamics 365 + AI, demonstrating the benefits of infusing AI researchers into our engineering groups.

SQuAD 2.0 Leaderboard

These results show the breadth of MRC challenges our teams are researching and the rapid pace of innovation and collaboration in the industry. Combining researchers with engineering teams to tackle product challenges while participating in industry research challenges is shaping up to be a beneficial way to advance AI research and bring AI-based solutions to customers more quickly.

Cheers,

Guggs

3-D shadow maps in R: the rayshader package

Data scientists often work with geographic data that needs to be visualized on a map, and sometimes the maps themselves are the data. The data is often located in two-dimensional space (latitude and longitude), but for some applications we have a third dimension as well: elevation. We could represent the elevations using contours, color, or 3-D perspective, but with the new rayshader package for R by Tyler Morgan-Wall, it's easy to visualize such maps as 3-D relief maps complete with shadows, perspective and depth of field:

👌 Dead-simple 3D surface plotting in the next version of rayshader! Apply your hillshade (or any image) to a 3D surface map. Video preview with rayshader's built-in palettes. #rstats

Code:

elmat %>%
  sphere_shade() %>%
  add_shadow(ray_shade(elmat)) %>%
  plot_3d(elmat)

— Tyler Morgan-Wall (@tylermorganwall) July 2, 2018

Tyler describes the rayshader package in a gorgeous blog post: his goal was to generate 3-D representations of landscape data that "looked like a paper weight". (Incidentally, you can use this package to produce actual paper weights with 3-D printing.) To this end, he went beyond simply visualizing a 3-D surface in rgl and added a rectangular "base" to the surface as well as shadows cast by the geographic features. He also added support for detecting (or specifying) a water level: useful for representing lakes or oceans (like the map of the Monterey submarine canyon shown below) and for visualizing the effect of changing water levels like this animation of draining Lake Mead.

Raytracer

The rayshader package is implemented as an independent R package; it doesn't require any external 3-D graphics software to work. Not only does that make it easy to install and use, but it also means that the underlying computations are available for specialized data analysis tasks. For example, research analyst David Waldran used a LIDAR scan of downtown Indianapolis to create (with the lidR package) a 3-D map of the buildings, and then used the ray_shade function to simulate the shadows cast by the buildings at various times during a winter's day. Averaging those shadows yields this map of the shadiest winter spots in Indianapolis:

Indianapolis Winter

The rayshader package is available for download now from your local CRAN mirror. You can also find an overview and the latest version of the package at the GitHub repository linked below.

Github (tylermorganwall): rayshader, R Package for Producing and Visualizing Hillshaded Maps from Elevation Matrices, in 2D and 3D

 

Healthcare Cloud Security Stack now available on Azure Marketplace

The success of healthcare organizations today depends on data-driven decision making. Inability to quickly access and process patient data due to outdated infrastructure may result in life-or-death situations. Healthcare organizations are making the shift to the cloud to enable better health outcomes, and a critical part of that process is ensuring security and vulnerability management.

The Healthcare Cloud Security Stack for Microsoft Azure addresses these critical needs, helping entities use cloud services without losing focus on cybersecurity and HIPAA compliance. Healthcare Cloud Security Stack offers a continuous view of vulnerabilities and a complete security suite for cloud and hybrid workloads.

What is Healthcare Cloud Security Stack?

Healthcare Cloud Security Stack, which is now available on Azure Marketplace, uses Qualys Vulnerability Management and Cloud Agents, Trend Micro Deep Security, and the XentIT Executive Dashboard as a unified cloud threat management solution. Qualys cloud agents continuously collect vulnerability information, which is mapped to the Trend Micro Deep Security (TMDS) IPS.

Slide1

In the event that Qualys identifies a vulnerability and Deep Security has a virtual patch available, Trend Micro Deep Security’s virtual patching engages until a physical patch is available and deployed. XentIT’s Executive Dashboard provides a single pane of glass into the vulnerabilities identified by Qualys. It also provides the number and types of threats blocked by Trend Micro, as well as actionable intelligence for further investigation and remediation by security analysts and engineers.

The Healthcare Cloud Security Stack unified solution eliminates the overhead of security automation and orchestration after migration to the cloud, resulting in:

  • Modernization of IT infrastructure while maintaining focus on cybersecurity and HIPAA compliance.
  • Gaining of actionable insights to proactively manage vulnerabilities.
  • Simplification of security management to free up resources for other priorities.

Learn more about Healthcare Cloud Security Stack on the Azure Marketplace, and look for more integrated solutions.

Spark Debugging and Diagnosis Toolset for Azure HDInsight

Debugging and diagnosing large, distributed big data jobs is a hard and time-consuming process. Debugging big data queries and pipelines has become more critical for enterprises; it includes debugging across many executors, fixing complex data flow issues, diagnosing data patterns, and troubleshooting problems with cluster resources. The lack of enterprise-ready Spark job management capabilities constrains the ability of enterprise developers to collaboratively troubleshoot, diagnose, and optimize the performance of workflows.

Microsoft is now bringing its decade-long experience of running and debugging millions of big data jobs to the open source world of Apache Spark. Today, we are delighted to announce the public preview of the Spark Diagnosis Toolset for HDInsight for clusters running Spark 2.3 and up. We are adding a set of diagnosis features to the default Spark history server user experience in addition to our previously released Job Graph and Data tabs. The new diagnosis features assist you in identifying low parallelization, detecting and running data skew analysis, gaining insights on stage data distribution, and viewing executor allocation and usage.

Data and time skew detection and analysis

Development productivity is key to making enterprise technology teams successful. The Azure HDInsight developer toolset brings industry-leading development practices to big data developers working with Spark. Job Skew Analysis identifies data and time skews by analyzing and comparing data input and execution time across executors and tasks, using built-in rules and user-defined rules. It increases productivity by automatically detecting skews, summarizing the diagnosis results, and displaying the task distribution between normal and skewed tasks.



image   

Executor Usage Analysis

Enterprises have to manage cost while maximizing the performance of their production Spark jobs, especially given the rapidly increasing amount of data that needs to be analyzed. The Executor Usage Analysis tool visualizes the Spark job executors’ allocation and utilization. The chart displays the dynamic change of allocated executors, running executors, and idle executors along the job execution timeline. The executor usage chart serves as an easy-to-use reference for understanding Spark job resource usage, so you can update configurations and optimize for performance or cost.

image

Getting started with Spark Debugging and Diagnosis Toolset

These features have been built into the HDInsight Spark history server.

  • Access from the Azure portal: open the Spark cluster, click Cluster Dashboard from Quick Links, and then click Spark History Server.
  • Access by URL: open the Spark History Server directly.

Feedback

We look forward to your comments and feedback. If you have any feature requests, asks, or suggestions, please send us a note at hdivstool@microsoft.com. For bug submissions, please open a new ticket using the template.

For more information, check out the following:
