
You can now download the new Open Source Windows Terminal


Last month Microsoft announced a new open source Windows Terminal! It's up at https://github.com/microsoft/Terminal and it's great, but for the last several weeks you've had to build it yourself as a Developer. It's been very v0.1 if you know what I mean.

Today you can download the Windows Terminal from the Microsoft Store! This is a preview release (think v0.2) but it'll automatically update, often, from the Windows Store if you have Windows 10 version 18362.0 or higher. Run "winver" to make sure.

Windows Terminal

If you don't see any tabs, hit Ctrl-T and note the + and the pull down menu at the top there. Under the menu go to Settings to open profiles.json. Here's mine on one machine.

Here's some Hot Windows Terminal Tips

You can do background images, even animated, with opacity (with useAcrylic off):

"backgroundImage": "c:/users/scott/desktop/doug.gif",

"backgroundImageOpacity": 0.7,
"backgroundImageStretchMode": "uniformToFill

You can edit the key bindings to your taste in the "keybindings" section. For now, be specific about the key chord, so the * key, for example, would be expressed as Ctrl+Shift+8.
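Hypothetically, a binding in that section looks something like the fragment below (the command name is illustrative, and the defaults have shifted between preview builds); note how the chord is spelled out explicitly rather than using the literal key:

"keybindings":
[
    { "command": "newTab", "keys": ["ctrl+shift+t"] }
]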

Try other things like cursor shape and color, history size, as well as different fonts for each tab.

 "cursorShape": "vintage"

If you're using WSL or WSL2, use the distro name like this in your new profile:

"wsl.exe -d Ubuntu-18.04"

If you like Font Ligatures or use Powerline, consider Fira Code as a potential new font.

I'd recommend you pin the Terminal to your taskbar and Start menu, but you can also run Windows Terminal with the command "wt" from Win+R or from another console. That's just "wt" and Enter!

Try not just "Ctrl+Mouse Scroll" but also "Ctrl+Shift+Mouse Scroll" and get your whole life!

Remember that the definition of a shell is somewhat fluid, so check out Azure Cloud Shell in your terminal!

Windows Terminal menus

Also, let's start sharing nice color profiles! Share your new ones as a Gist in this format. Note the name.

{
  "background" : "#2C001E",
  "black" : "#4E9A06",
  "blue" : "#3465A4",
  "brightBlack" : "#555753",
  "brightBlue" : "#729FCF",
  "brightCyan" : "#34E2E2",
  "brightGreen" : "#8AE234",
  "brightPurple" : "#AD7FA8",
  "brightRed" : "#EF2929",
  "brightWhite" : "#EEEEEE",
  "brightYellow" : "#FCE94F",
  "cyan" : "#06989A",
  "foreground" : "#EEEEEE",
  "green" : "#300A24",
  "name" : "UbuntuLegit",
  "purple" : "#75507B",
  "red" : "#CC0000",
  "white" : "#D3D7CF",
  "yellow" : "#C4A000"
}

Note also that this should be the beginning of a wonderful Windows Console ecosystem. This isn't the one terminal to end them all; it's the one to start them all. I've loved alternative consoles for YEARS, whether it be ConEmu or Console2 many years ago, and I've long declared that Text Mode is a missed opportunity.

Remember also that Terminal !== Shell and that you can bring your shell of choice into your Terminal of choice! If you want the deep architectural dive, be sure to watch the BUILD 2019 technical talk with some of the developers or read about ConPTY and how to integrate with it!


Sponsor: Get the latest JetBrains Rider with WinForms designer, Edit & Continue, and an IL (Intermediate Language) viewer. Preliminary C# 8.0 support, rename refactoring for F#-defined symbols across your entire solution, and Custom Themes are all included.



© 2018 Scott Hanselman. All rights reserved.
     

Azure Cosmos DB: A competitive advantage for healthcare ISVs


This blog was co-authored by Shweta Mishra, Senior Solutions Architect, CitiusTech and Vinil Menon, Chief Technology Officer, CitiusTech

CitiusTech is a specialist provider of healthcare technology services which helps its customers accelerate innovation in healthcare. CitiusTech used Azure Cosmos DB to simplify the real-time collection and movement of healthcare data from a variety of sources in a secure manner. With the proliferation of patient information from established and current sources, accompanied by stringent regulations, healthcare systems today are gradually shifting towards near real-time data integration. To achieve this, healthcare systems not only need low latency and high availability, but must also be highly responsive. Furthermore, they need to scale effectively to manage the inflow of high-speed, large volumes of healthcare data.

The situation

The rise of the Internet of Things (IoT) has enabled ordinary medical devices, wearables, and traditional hospital-deployed medical equipment to collect and share data. Within a wide area network (WAN) there are well-defined standards and protocols, but with the ever-increasing number of devices getting connected to the internet, there is a general lack of standards compliance and consistency of implementation. Moreover, data collation and generation from IoT-enabled medical and mobile devices need specialized applications to cope with increasing volumes of data.

This free-form approach provides a great deal of flexibility, since different data can be stored in document-oriented stores as business requirements change. Relational databases aren't efficient at performing CRUD operations on such data, but they are essential for handling transactional data where consistent data integrity is necessary. Different databases are designed to solve different problems; using a single database engine for multiple purposes usually leads to non-performant solutions, while managing multiple types of databases adds operational overhead.

Developing distributed, global-scale solutions is challenged by the capability and complexity of scaling databases across multiple regions without compromising performance, while also complying with data sovereignty requirements. This often leads to inefficient management of multiple regional databases and/or underperformance.

Solution

Azure Cosmos DB supports polyglot persistence, allowing a mix of data store models without compromising performance. It is a multi-model, highly available, globally scalable database with proven low-latency reads and writes. Azure Cosmos DB has enterprise-grade security features and keeps all data encrypted at rest.

Azure Cosmos DB is suited for distributed global scale solutions as it not only provides a turnkey global distribution feature but can geo-fence a database to specific regions to manage data sovereignty compliance. Its multi-master feature allows writes to be made and synchronized across regions with guaranteed consistency. In addition, it supports multi-document transactions with ACID guarantees.

Use cases in healthcare

Azure Cosmos DB works very well for the following workloads.

1. Global scale secure solutions

Organizations like CitiusTech that offer mission-critical, global-scale solutions should consider Azure Cosmos DB a critical component of their solution stack. For example, an ISV developing a non-drug treatment delivered to patients through a medical device at a facility can build web or mobile applications which store the treatment information and medical device metadata in Azure Cosmos DB. Treatment information can then be pushed to medical devices at facilities around the world. ISVs can meet data residency requirements by using the geo-fencing feature.

Azure Cosmos DB can also be used as a multi-tenant database with a carefully designed strategy. For instance, if a tenant has different scaling requirements, a dedicated Azure Cosmos container can be created for that tenant. In Azure Cosmos DB, containers serve as the logical units of distribution and scalability. Multi-tenancy may also be possible at a partition level within an Azure Cosmos container, but it needs to be designed carefully to avoid creating hot spots and compromising overall performance.
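As a hedged sketch (not CitiusTech's implementation), provisioning a dedicated container per tenant with the @azure/cosmos SDK might look like this; the database name, container naming convention, and partition key are illustrative:

import { CosmosClient } from "@azure/cosmos";

// Endpoint and key are assumed to come from configuration.
const client = new CosmosClient({
    endpoint: process.env.COSMOS_ENDPOINT!,
    key: process.env.COSMOS_KEY!,
});

// Create (or reuse) one container per tenant so each tenant can be scaled independently.
async function ensureTenantContainer(tenantId: string) {
    const { database } = await client.databases.createIfNotExists({ id: "healthcare" });
    const { container } = await database.containers.createIfNotExists({
        id: `tenant-${tenantId}`,
        // Pick a partition key that spreads load evenly to avoid hot partitions.
        partitionKey: { paths: ["/patientId"] },
    });
    return container;
}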

2. Real-time location system, Internet of Things

Azure Cosmos DB is effective for building solutions for real-time tracking and management of medical devices and patients, since these scenarios often require high data velocity, scale, and resilience. Azure Cosmos DB supports low-latency writes and reads, and all data is replicated across multiple fault and update domains in each region for high availability and resilience. Among its five consistency levels it offers session consistency, which is suitable for such scenarios: session consistency guarantees strong consistency within a session.

Azure Cosmos DB also allows processing power to be scaled, which is useful for burst scenarios, and provides elastic scaling to petabytes of storage. Request units (RUs) can be adjusted programmatically to match the workload.

CitiusTech worked with a leading provider of medical grade vital signs and physiological monitoring solution to build a medical IoT based platform with the following requirements:

  • Monitor vitals with medical quality
  • Provide solutions for partners to integrate custom solutions
  • Deliver personalized, actionable insights
  • Messages and/or device generated data don’t have a fixed structure and may change in the future
  • Data producer(s) to simultaneously upload data for at least 100 subjects in less than two seconds per subject, receiving no more than 40*21=840 data points per subject, per request
  • Data consumer(s) to read simultaneously, data of at least 100 subjects in less than two seconds, producing no more than 15,000 data points per data consumer
  • Data for most recent 14 days shall be ready to be queried, and data older than 14 days to be moved to a cold storage

CitiusTech used Azure Cosmos DB as hot storage for health data, since it enabled low-latency writes and reads of the data generated continuously by the wearable sensors. Azure Cosmos DB provided schema-agnostic, flexible storage for documents of different shapes and sizes at scale, and offered enterprise-grade security with Azure compliance certifications.

The time-to-live (TTL) feature in Azure Cosmos DB automatically deleted expired items based on the TTL value, and the database was geo-distributed with the geo-fencing feature to address data sovereignty compliance requirements.
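A hedged sketch of how such a 14-day hot store could be declared with the SDK's defaultTtl container property (names are illustrative; moving expired data to cold storage would be handled by a separate process):

import { CosmosClient } from "@azure/cosmos";

const client = new CosmosClient({
    endpoint: process.env.COSMOS_ENDPOINT!,
    key: process.env.COSMOS_KEY!,
});

async function createHotStore() {
    const { database } = await client.databases.createIfNotExists({ id: "telemetry" });
    // defaultTtl is in seconds; items older than 14 days are deleted automatically.
    return database.containers.createIfNotExists({
        id: "vitals-hot",
        partitionKey: { paths: ["/subjectId"] },
        defaultTtl: 14 * 24 * 60 * 60,
    });
}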

Solution architecture


Architecture of data flow in CitiusTech’s solution using Azure Cosmos DB

Key insights

Azure Cosmos DB unlocks the potential of polyglot persistence for healthcare systems to integrate healthcare data from multiple systems of record. It also addresses healthcare's need for flexibility, adaptability, speed, security, and scale while maintaining low operational overhead and high performance.

About CitiusTech

CitiusTech is a specialist provider of healthcare technology services and solutions to healthcare technology companies, providers, payers and life sciences organizations. CitiusTech helps customers accelerate innovation in healthcare through specialized solutions, healthcare technology platforms, proficiencies and accelerators. Find out more about CitiusTech.

Azure.Source – Volume 88


News and updates

Announcing native backup for SQL Server 2008 end of support in Azure

With SQL Server 2008 and 2008 R2 approaching end of support, many customers are moving to Azure. They see this milestone as an opportunity to reimagine and transform their infrastructure with the power of cloud computing. Azure’s offer of free extended security updates for three years provides a new lease on life to these servers while giving organizations time to upgrade. Learn how easy it is to protect your SQL databases in Azure.

Microsoft and Truffle partner to bring a world-class experience to blockchain developers

Last month, Microsoft released Azure Blockchain Service, making it easy for anyone to quickly set up and manage a blockchain network and providing a foundation for developers to build a new class of multi-party blockchain applications in the cloud. To enable end-to-end development of these new apps, we've collaborated with teams from Visual Studio Code to Azure Logic Apps and Microsoft Flow to Azure DevOps to deliver a high-quality experience that integrates the Microsoft tools developers trust and the open-source tools they love. Now we have doubled down on our relationship by announcing an official partnership between our organizations to bring Truffle blockchain tools for developer experience and DevOps to Microsoft Azure.

Now available

Azure and Office 365 generally available today, Dynamics 365 and Power Platform available by end of 2019

Microsoft Azure and Microsoft Office 365 are taking a major step together to help support the digital transformation of our customers. Both Azure and Office 365 are now generally available from our first cloud datacenter regions in the Middle East, located in the United Arab Emirates (UAE). Dynamics 365 and Power Platform, offering the next generation of intelligent business applications and tools, are anticipated to be available from the cloud regions in UAE by the end of 2019.

In preview

Introducing next generation reading with Immersive Reader, a new Azure Cognitive Service

We’re unveiling the preview of Immersive Reader, a new Azure Cognitive Service in the Language category. Developers can now use this service to embed inclusive capabilities into their apps for enhancing text reading and comprehension for users regardless of age or ability. No machine learning expertise is required. Based on extensive research on inclusivity and accessibility, Immersive Reader’s features are designed to read the text aloud, translate, focus user attention, and much more. Immersive Reader helps users unlock knowledge from text and achieve gains in the classroom and office.

Announcing the preview of Microsoft Azure Bastion

For many customers around the world, securely connecting from the outside to workloads and virtual machines on private networks can be challenging. Exposing virtual machines to the public Internet to enable connectivity through Remote Desktop Protocol (RDP) and Secure Shell (SSH) increases the perimeter, rendering your critical networks and attached virtual machines more open and harder to manage. To connect to their virtual machines, most customers either expose them to the public Internet or deploy a bastion host, such as a jump-server or jump-box. So we're excited to announce the preview of Azure Bastion, a new managed PaaS service that provides seamless RDP and SSH connectivity to your virtual machines over Secure Sockets Layer (SSL).

Virtual machine scale set insights from Azure Monitor

In October 2018 we announced the public preview of Azure Monitor for Virtual Machines (VMs). At that time, we included support for monitoring your virtual machine scale sets from the at-scale view under Azure Monitor. Today we are announcing the public preview of monitoring your Windows and Linux VM scale sets from within the scale set resource blade. This blog highlights several enhancements.

Technical content

Using Azure Search custom skills to create personalized job recommendations

The Microsoft Worldwide Learning Innovation lab is an idea incubation lab within Microsoft that focuses on developing personalized learning and career experiences. One of the recent experiences that the lab developed focused on offering skills-based personalized job recommendations. Research shows that job search is one of the most stressful times in someone’s life. Everyone remembers at some point looking for their next career move and how stressful it was to find a job that aligns with their various skills. Harnessing Azure Search custom skills together with our library of technical capabilities, we were able to build a feature that offers personalized job recommendations based on identified capabilities from resumes.

Azure Stack IaaS – part ten

One of the best things about running your VMs in Azure or Azure Stack is that you can begin to modernize around your virtual machines (VMs) by taking advantage of the services provided by the cloud. Platform as a Service (PaaS) is the term often applied to the capabilities that are available to your application to use without the burden of building and maintaining these capabilities yourself. In fact, cloud IaaS is itself a PaaS, since you do not have to build or maintain the underlying hypervisors, software-defined network and storage, or even the self-service API and portal. Furthermore, Azure and Azure Stack give you PaaS services which you can use to modernize your application. In this article we will explore how you can modernize your application with web apps, serverless functions, blob storage, and Kubernetes as part of your Journey to PaaS.

Getting Started with Azure Machine Learning service with Visual Studio Code | Azure Tips and Tricks

In Azure, you can create complex machine learning models and train them with data in a Machine Learning Service workspace. This is a workspace where you can manage all of your machine learning tools and assets, like experiments, models, scripts, and model deployments. You can also use the workspace to share your machine learning work with other data scientists in your team. Let's get started with Azure Machine Learning for VS Code and the Azure Machine Learning Service workspace.

Azure Shows

Azure tips and tricks for Visual Studio 2019 | Azure Friday

Learn Michael Crump's latest Azure tips and tricks that will help you be more productive working with Azure in Visual Studio 2019.

.NET Core 3.0 with Scott Hunter | On .NET

.NET Core 3 will be a major milestone with tons of new features, performance updates and support for new workloads. In this episode, Richard Lander and Scott Hunter get together to discuss some of the highlights that developers can look forward to in this new release.

Server-side Blazor in .NET Core 3.0 | On .NET

In this episode, Shayne Boyer sits down with Daniel Roth to get an understanding of what Blazor is and what benefits it brings to the table for building web applications.

Five things you didn’t know Python could do | Five Things

This week, Python (the language, not the snake) aficionado Nina Zakharenko joins us for Five Things that you didn't know Python could do. And don't worry, there are plenty of snake references and even a free potato joke. Also, Burke finds snake facts on the internet and Nina tries her first Goo Goo Cluster.

All about Rust in real life: Linkerd 2.0 | The Open Source Show

Oliver Gould, CTO at Buoyant and one of the creators of Linkerd, joins Lachie Evenson to talk Rust, one of Stack Overflow's most loved programming languages for the fourth year running. Specifically, how and why Linkerd rewrote 2.0 in Rust, what's changed over the years, and Oliver's tips for navigating tooling, package management, release channels, and more.

Azure IoT Edge development with Azure DevOps | Internet of Things Show

The Internet of Things is a technology paradigm that involves the use of internet connected devices to publish data often in conjunction with real-time data processing, machine learning, and/or storage services. We will examine IoT Edge Solutions using Azure DevOps, Application Insights, Azure Container Registries, containerized IoT edge devices and Azure Kubernetes Service to create an end-to-end pipeline which deploys, smoke tests, and allows for scalable integration testing using replica sets in k8s.

Eric Fleming on middle-of-the-day deployments | Azure DevOps podcast

Today’s episode is all about recognizing middle-of-the-day deployments. How teams such as Netflix, Facebook, and even the Azure DevOps Product Team are doing them; and taking a look at how other teams can achieve that for themselves!

SCCM and Intune—now managing 175 million devices!

How to Upgrade to TypeScript without anybody noticing, Part 2


This guide will show you how to fix Typescript compile errors in a Javascript project that recently added Typescript support via a tsconfig.json. It assumes that the tsconfig.json is configured according to the description in part 1 of this post, and that you also installed types for some of your dependencies from the @types/* namespace. This guide is more of a list of tasks that you can pick and choose from, depending on what you want to fix first. Here are the tasks:

  1. Add missing types in dependencies.
  2. Fix references to types in dependencies.
  3. Add missing types in your own code.
  4. Work around missing types in dependencies.
  5. Fix errors in existing types.
  6. Add type annotations to everything else.

This guide does not teach you how to write type definitions. The Declaration section of the Typescript handbook is the best place to learn about that. Here, you’ll just see types presented without a lot of explanation.

Add missing types in dependencies

Let’s start with @types/shelljs. In Makefile.js, I see a few errors. The first is that the module require('shelljs/make') isn’t found. The second group of errors is that the names ‘find’, ‘echo’ and a few others aren’t found.

These errors are related. It turns out that @types/shelljs doesn't even include shelljs/make.d.ts right now. It's completely missing. If you look at the source, shelljs/make.js does two things:

  1. Add the contents of the parent shelljs to the global scope.
  2. Add a global object named ‘target’ that allows you to add make targets.

Let’s say you want to add make to Definitely Typed so that it is available in @types/shelljs. Your first step is to create make.d.ts inside node_modules/@types/shelljs/. This is the wrong location — it’s inside your own node_modules folder — but it makes development super easy to test that you’re actually adding the missing stuff. You can create a proper PR after everything is working.

Start with this:

import * as shelljs from './';
declare global {
  const cd: typeof shelljs.cd;
  const pwd: typeof shelljs.pwd;
  // ... all the rest ...
}

This copies all of shelljs’ contents into the global namespace. Then add the type for target to the globals as well:

const target: {
  all?: Target;
  [s: string]: Target;
}
interface Target {
  (...args: any[]): void;
  result?: any;
  done?: boolean;
}

This exposes a couple more errors. See the section on fixing errors in existing types for how to fix those.

Now we want to publish this to Definitely Typed:

  1. Fork DefinitelyTyped on github.
  2. git clone https://github.com/your-name-here/DefinitelyTyped
  3. cp node_modules/@types/shelljs/make.d.ts ~/DefinitelyTyped/types/shelljs/
  4. git checkout -b add-shelljs-make
  5. Commit the change and push it to your github fork.
  6. Create a PR for the change.

If there are lint problems, the CI run on Definitely Typed will catch them.

For more detail on writing definitions for Definitely Typed, see the Declaration section of the Typescript handbook.

Fix references to types in dependencies

typescript-eslint-parser actually has quite a bit of type information in its source already. It just happens to be written in JSDoc, and it's often almost, but not quite, what Typescript expects to see. For example, in analyze-scope.js, visitPattern has an interesting mix of types:

/**
 * Override to use PatternVisitor we overrode.
 * @param {Identifier} node The Identifier node to visit.
 * @param {Object} [options] The flag to visit right-hand side nodes.
 * @param {Function} callback The callback function for left-hand side nodes.
 * @returns {void}
 */
visitPattern(node, options, callback) {
    if (!node) {
        return;
    }

    if (typeof options === "function") {
        callback = options;
        options = { processRightHandNodes: false };
    }

    const visitor = new PatternVisitor(this.options, node, callback);
    visitor.visit(node);

    if (options.processRightHandNodes) {
        visitor.rightHandNodes.forEach(this.visit, this);
    }
}

In the JSDoc at the start, there’s an error on Identifier. (Object and Function are fine, although you could write more specific types.) That’s weird, because those types do exist in estree. The problem is that they’re not imported. Typescript lets you import types directly, like this:

import { Identifier, ClassDeclaration } from "estree";

But this doesn’t work in Javascript because those are types, not values. Types don’t exist at runtime, so the import will fail at runtime when Identifier is not found. Instead, you need to use an import type. An import type is just like a dynamic import, except that it’s used as a type. So, just like you could write:

const estree = import("estree");

to dynamically import the estree module, you can write:

/** @type {import("estree").Identifier} */
var id = ...

to import the type Identifier without an import statement. And, because it’s inconvenient to repeat import all over the place, you usually want to write a typedef at the top of the file:

/** @typedef {import("estree").Identifier} Identifier */

With that alias in place, references to Identifier resolve to the type from estree:

/**
 * @param {Identifier} node now has the correct type
 */

Here’s the commit.

Add missing types in your own code

Fixing these types still leaves a lot of undefined types in analyze-scope.js. The types look like estree types, but they’re prefixed with TS-, like TSTypeAnnotation and TSTypeQuery. Here’s where TSTypeQuery is used:

/**
 * Create reference objects for the object part. (This is `obj.prop`)
 * @param {TSTypeQuery} node The TSTypeQuery node to visit.
 * @returns {void}
 */
TSQualifiedName(node) {
    this.visit(node.left);
}

It turns out that these types are specific to typescript-eslint-query. So you’ll have to define them yourself. To start, define the typedefs you need as any. This gets rid of the errors at the cost of accuracy:

/** @typedef {any} TSTypeQuery */
// lots more typedefs ...

At this point, you have two options: bottom-up discovery of how the types are used, or top-down documentation of what the types should be.

Bottom-up discovery, which is what you'll see below, has the advantage that you will end up with zero compile errors afterward. But it doesn't scale well; when a type is used throughout a large project, the chances of it being misused are pretty high.

Top-down documentation works well for large projects that already have some kind of documentation. You just need to know how to translate documentation into Typescript types; the Declaration section of the Typescript handbook is a good starting point for this. You will sometimes have to change your code to fit the type using the top-down approach as well. Most of the time that’s because the code is questionable and needs to be changed, but sometimes the code is fine and the compiler gets confused and has to be placated.

Let’s use bottom-up discovery in this case because it looks like top-down documentation would involve copying the entire Typescript node API into analyze-scope.js. To do this, change the typedefs one by one to ‘unknown’, and look for errors that pop up. For example:

/** @typedef {unknown} TSTypeQuery */

Now there's an error on the usage of TSTypeQuery in TSQualifiedName's node.left:

/**
 * Create reference objects for the object part. (This is `obj.prop`)
 * @param {TSTypeQuery} node The TSTypeQuery node to visit.
 * @returns {void}
 */
TSQualifiedName(node) {
    this.visit(node.left);
    //              ~~~~
    // error: type 'unknown' has no property 'left'
}

Looks like TSTypeQuery is supposed to have a left property, so change TSTypeQuery from unknown to { left: unknown }. There’s no more indication of what the type of left is, so leave it as unknown:

/** @typedef {{ left: unknown }} TSTypeQuery */

As you can see, bottom-up type discovery can be a bit unsatisfying and underspecified, but it’s less disruptive to existing code. Here’s the commit.

Work around missing types

Sometimes you’ll find that one of your dependencies has no @types package at all. You are free to define types and contribute them to Definitely Typed, of course, but you usually need a quick way to work around missing dependencies. The easiest way is to add your own typings file to hold workarounds. Let’s look at visitPattern in analyze-scope.js again:

/**
 * Override to use PatternVisitor we overrode.
 * @param {Identifier} node The Identifier node to visit.
 * @param {Object} [options] The flag to visit right-hand side nodes.
 * @param {Function} callback The callback function for left-hand side nodes.
 * @returns {void}
 */
visitPattern(node, options, callback) {
    if (!node) {
        return;
    }

    if (typeof options === "function") {
        callback = options;
        options = { processRightHandNodes: false };
    }

    const visitor = new PatternVisitor(this.options, node, callback);
    visitor.visit(node);

    if (options.processRightHandNodes) {
        visitor.rightHandNodes.forEach(this.visit, this);
    }
}

Now there is an error on

const visitor = new PatternVisitor(this.options, node, callback)
                ~~~~~~~~~~~~~~~~~~
                Expected 0 arguments, but got 3.

But if you look at PatternVisitor in the same file, it doesn’t even have a constructor. But it does extend OriginalPatternVisitor:

const OriginalPatternVisitor = require("eslint-scope/lib/pattern-visitor");
// much later in the code...
class PatternVisitor extends OriginalPatternVisitor {
    // more code below ...
}

Probably OriginalPatternVisitor has a 3-parameter constructor which PatternVisitor inherits. Unfortunately, eslint-scope doesn’t export lib/pattern-visitor, so PatternVisitor doesn’t get the 3-parameter constructor. It ends up with a default 0-parameter constructor.

As described in “Add missing types in dependencies”, you could add OriginalPatternVisitor in lib/pattern-visitor.d.ts, just like we did for make.d.ts in shelljs. But when you’re just getting started, sometimes you just want to put a temporary type in place. You can add the real thing later. Here’s what you can do:

  1. Create types.d.ts at the root of typescript-eslint-parser.
  2. Add the following code:
declare module "eslint/lib/pattern-visitor" {
    class OriginalPatternVisitor {
        constructor(x: any, y: any, z: any) {
        }
    }
    export = OriginalPatternVisitor;
}

This declares an “ambient module”, which is a pompous name for “fake workaround module”. It’s designed for exactly this case, though, where you are overwhelmed by the amount of work you need to do and just want a way to fake it for a while. You can even put multiple declare modules in a single file so that all your workarounds are in one place.

After this, you can improve the type of OriginalPatternVisitor in the same bottom-up or top-down way that you would improve any other types. For example, you can look at pattern-visitor.js in eslint-scope to find the names of the constructor parameters. Then, a little lower down, in the Identifier method of OriginalPatternVisitor, there is a usage of callback that gives enough information to guess its type.

Here’s what you’ll end up with:

declare module "eslint-scope/lib/pattern-visitor" {
    import { Node } from "estree";
    type Options = unknown;
    class OriginalPatternVisitor {
        constructor(
            options: Options,
            rootPattern: Node,
            callback: (pattern: Node, options: Options) => void);
    }
    export = OriginalPatternVisitor;
}

Fix errors in existing types

Unfortunately, the improved type for pattern-visitor once again causes an error on new PatternVisitor. This time, the callback’s type, Function isn’t specific enough to work with the specific function type of the callback:

callback: (pattern: Node, options: Options) => void

So the existing JSDoc type annotation needs to change:

/**
 * Override to use PatternVisitor we overrode.
 * @param {Identifier} node The Identifier node to visit.
 * @param {Object} [options] The flag to visit right-hand side nodes.
 * @param {Function} callback The callback function for left-hand side nodes.
 * @returns {void}
 */
visitPattern(node, options, callback) {

The right fix is to change the type Function to the more precise (pattern: Node, options: Options) => void:

/**
 * Override to use PatternVisitor we overrode.
 * @param {Identifier} node The Identifier node to visit.
 * @param {Object} [options] The flag to visit right-hand side nodes.
 * @param {(pattern: Node, options: import("eslint-scope/lib/pattern-visitor").Options) => void} callback The callback function for left-hand side nodes.
 * @returns {void}
 */
visitPattern(node, options, callback) {

Add JSDoc types to everything else

Once you get all the existing type annotations working, the next step is to add JSDoc types to everything else. You can turn on "strict": true to see how far you have to go — among other things, this marks any variables that have the type any with an error.
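If you're following the setup from part 1, the relevant part of tsconfig.json might look roughly like this once strict checking is on (a sketch; your other options will differ):

{
    "compilerOptions": {
        "allowJs": true,
        "checkJs": true,
        "noEmit": true,
        "strict": true
    }
}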

You should fall into a back-and-forth of adding new JSDoc type annotations and fixing old types. Usually old types just need to be updated to work with Typescript, but sometimes you’ll find a bug.

The post How to Upgrade to TypeScript without anybody noticing, Part 2 appeared first on TypeScript.

Java on Visual Studio Code June Update


Welcome to the June update of Java on Visual Studio Code!

Earlier this month, we shared our new Java Installer for Visual Studio Code, which aims to help new Java developers get their environment ready and start coding in just a few clicks. In this update, we'd like to share a couple of new features and enhancements delivered over the last few weeks.

More code actions

Developers need refactoring and code actions to achieve high productivity, so we’re bringing more of those features to you.

Enhanced “Generate getters and setters”

In addition to bulk-generating getters and setters for all member variables, if the class has more than one field the source action will also show a quick pick box which allows you to select the target fields for which to generate the accessor methods.

The source action is also aware of the java.codeGeneration.generateComments preference and will use it to decide whether to generate comments for getter and setter methods.

Generate Delegate Methods

This new code action enables generating delegate methods.

Generate Constructor

This source action helps you add a constructor from the super class.

Assign parameter to new field

This source action assigns a parameter to a new field for unused constructor parameter(s).

Performance improvements

A set of changes has been made to further improve the performance of Java in Visual Studio Code, including a fix for an I/O issue on Windows, a reduced memory footprint for large projects with deep modules, and batch project imports. VS Code is a lightweight editor, and we'd like to make sure it still feels like one despite more and more features being added to it.

Debugger updates

Debugging is the most frequently used feature, second only to code editing. We'd like you to enjoy debugging Java in Visual Studio Code.

Show more meaningful value in variable window and hover tool-tip

We're now providing more detailed information for variables during debugging:

  • For classes that override the toString method, we show the toString() details.
  • For Collection and Map classes, we show an additional size=x detail.
  • For Entry, we show key:value details.

New HCR button

To better expose the hot code replace (HCR) feature and let you control it more explicitly, we've added a new button to the toolbar and a new debug setting, java.debug.settings.hotCodeReplace, to let you control how HCR is triggered. The default is manual.

  • manual – Click the toolbar button to apply the change to the running app

  • auto – Automatically apply the changes after compilation. This is the old behavior.
  • never – Never apply the changes
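For example, to apply changes automatically you could add the following to your VS Code settings (a sketch of the setting described above):

"java.debug.settings.hotCodeReplace": "auto"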

See HCR in action

Global setting for selecting debug console

While VS Code offers a powerful Debug Console with REPL (read-eval-print loop) functionality, one major restriction is that it doesn't accept input. For programs which need to take console input, developers need to specify integratedTerminal instead of internalConsole in launch.json.

"console": "integratedTerminal"

However, this is not convenient if you need to do it repeatedly. So we're introducing a global setting, java.debug.settings.console. You can use this setting to configure the default debug console so you don't need to change launch.json every time.

"java.debug.settings.console": "integratedTerminal"

Other updates

Maven

Two new configurations are now available for the Maven extension:

  1. pomfile.globPattern – specifies how the extension searches for POM files.
  2. pomfile.autoUpdateEffectivePOM – specifies whether to update the effective POM automatically.

Test Runner

In recent releases, we've added support for a couple of additional JUnit 5 annotations, such as @Nested and @TestFactory. The test runner will now also automatically show the test report after execution.

Sign up

If you'd like to follow the latest on Java on VS Code, please share your email with us using the form below. We will send out updates and tips every couple of weeks and invite you to test our unreleased features and provide feedback early on.

Try it out

Please don't hesitate to give it a try! Your feedback and suggestions are very important to us and will help shape our product in the future.

The post Java on Visual Studio Code June Update appeared first on The Visual Studio Blog.

Using natural language processing to manage healthcare records


The next time you see your physician, consider the times you fill in a paper form. It may seem trivial, but the information could be crucial to making a better diagnosis. Now consider the other forms of healthcare data that permeate your life—and that of your doctor, nurses, and the clinicians working to keep patients thriving. Forms and diagnostic reports are just two examples. The volume of such information is staggering, yet fully utilizing this data is key to reducing healthcare costs, improving patient outcomes, and other healthcare priorities. Now, imagine if artificial intelligence (AI) can be used to help the situation.

The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how SyTrue, a Microsoft partner focusing on healthcare, uses Azure to empower healthcare organizations to improve efficiency, reduce costs, and improve patient outcomes.

Billions of records

Valuable insights remain locked in unstructured medical records such as scanned documents in PDF format that, while human-readable, present a major obstacle to the automation and analytics required. Over four billion medical notes are created every year. The clinical and financial insights embodied within these records are needed by an average of 20+ roles and processes downstream of the record generation. Currently, healthcare providers and payors require an army of professionals to read, understand, and extract healthcare data from the flood of clinical documents generated every day. But success has been elusive.

It's not for lack of trying. In the last decade, an effort was made to accumulate and upload data into electronic health records (EHR) systems. Meaningful Use is a government-led incentive program that aims to accelerate the movement from hard-copy filing systems to electronic health records. Still, the problem is related to the volume and the lack of time and resources to assimilate masses of data.

Note: the Meaningful Use program has a number of goals. An important one is, “Ensure adequate privacy and security protection for personal health information.” Data security is a prime value for Azure services. Data services such as Azure SQL Database encrypt data at rest and in-transit.

Moving the needle on healthcare

As costly and extensive as this effort was, many believe that we have yet to see evidence of any significant impact from the digitization of healthcare data to the quality or cost of care. One way to radically improve this is using AI for natural language processing (NLP)—specifically to automate reading of the documents. That enables subsequent analytics, yielding the most relevant actionable information in near real-time from mountains of documents to the medical professional. It empowers them to deliver better quality care, more efficiently, at lower cost.

In action

A Microsoft partner, SyTrue is leading the way. In the words of their Founder and CEO, Kyle Silvestro, “At SyTrue, the next big challenge is accessing this vast pool of accumulated patient data in a serviceable way. We’ve created a platform that transforms healthcare documentation into actionable information. The focus is on three main features: speed, context, and adaptability. Our technology consumes thousand-paged medical records in sub-seconds. The innovation is built on informational models that can ingest data from multiple types of clinical and financial health care organizations. This allows diverse healthcare stakeholders to use the system. The main objective for the technology is to present key clinical and financial insights to healthcare stakeholders in order to reduce waste and improve clinical outcomes.”

Informed by natural language processing and machine learning

SyTrue relies on NLP and machine learning (ML) as the underlying technology. Using their own proprietary methods, they perform “context-driven information extraction.” In other words, they connect the dots. The graphic below shows their processes.

Diagram displaying context-driven information extraction

Improving healthcare

SyTrue offers the NLP OS (Operating System) for healthcare. It aids in several ways:

  • It unlocks healthcare records and enables healthcare professionals to interact with medical record data and its clinical and financial implications. Specifically, it eliminates the need for professionals to hunt for the same key observations. This enables professionals to spend more time focused on patient care.
  • NLP OS also bridges the communication between a specialist provider and a primary care physician regarding the care of a shared patient. The system extracts and highlights continuity of care recommendations generated within the patient’s care team.
  • A large healthcare organization installed SyAudit, powered by SyTrue NLP OS, at the front of their medical chart review process. Before the charts reach a nurse-reviewer, they are processed through this solution. The system interprets the documentation to determine if a nurse review is in fact needed, or if the documentation lacks actionable information. This potentially decreases the time spent by nurse reviewers.
  • A healthcare provider used SyReview, another SyTrue solution powered by the SyTrue NLP OS, for their quality capturing and reporting process. The particular process is related to an incentive program which directly ties quality to Medicare payment. Automating the quality-capturing process strengthens the feedback loop to providers that needed to show improvement. The organization also eliminated its manual quality-capture process, which was slow, expensive, and often inaccurate.

Next steps

To learn more about Azure in the healthcare industry, see Azure for health.

To find out more about this solution, go to the Azure Marketplace listing for NLP OS™ for Healthcare and click Contact me.

Cppp 2019 Trip Report


Summary

CPPP is a new C++ conference in Paris, France. Its first iteration ran for a single day with three parallel tracks, drawing in 160 attendees.

The conference was great on all fronts: the speakers and talks were varied and high-quality, the venue was right next to the Eiffel Tower and had plenty of space, the food was tasty and varied (shoutout to the cream-filled pastries), and the day went smoothly with strong communication from the organisers (Joel Falcou and Fred Tingaud).

The three tracks were themed on roughly beginner, intermediate, and expert content, where the beginner track was in French and the other two were in English.

My Talk


Photo by @Winwardo

My talk was named “Tools to Ease Cross-Platform C++ Development”. I tried something a bit different from other cross-platform talks we’ve given, in that I tried to develop a cross-platform application live rather than demoing different features one-by-one.

I wrote a whole Brainfuck-to-x64 compiler in Visual Studio during the talk which targeted Windows and Linux (through the WSL configuration in VS) and used Vcpkg to fulfill a dependency on fmtlib. The compiler worked first time as well! You can find the code and slides on GitHub.

 

Talks I Attended

Kate Gregory – Emotional Code


Photo by @branaby

After some pastries and an introduction from the organisers, we began with a keynote from Kate Gregory on Emotional Code. This was the third time I’d seen a version of this talk live (once at C++ on Sea and again at ACCUConf), but it was still very enjoyable this time round and had some new content to make it worthwhile.

As programmers, we can be under the belief that code is neutral and emotionless, but Kate argues that this is not the case, and that the code you write can reflect a lot about the environment in which you work. I find this talk illuminating each time I watch it, and I'd recommend giving it a try and thinking about how your work situation could be improved to make your code better. Kate is also one of those speakers who I could watch talk about her teapot collection (I don't know if she has a teapot collection) and not get bored, so if nothing else you'll have a good way to pass an hour.

Mock Interviews

I gave my talk after Kate's, after which I napped on a couch to recover somewhat before helping out with the mock interviews. I and some other experienced interviewers held a series of 20-minute sessions with people looking to improve their interview skills. This session wasn't very well attended, but I think those who came found it very valuable. Having run a similar event at CppCon, I think these are wonderful opportunities for people to get in some practice before trying to get jobs, so I'd highly recommend looking out for them when you're at an event.

Patricia Aas – Anatomy of an Exploit


Photo by @a_bigillu

Patricia’s talks are always energetic and engaging, with beautiful slides and keen insights. This one was no different, even if I was exhausted by this point of the day and got told off for nodding off in the front row (sorry Patricia!).

This was an introduction to how code exploits work by breaking the program out of the world of normal behaviour and into the upside-down of The Weird, then how execution is controlled in this bizarre state. It's a great first step into the technical details of software vulnerabilities, so give it a watch if you're interested in learning about this area.

Ben Deane – Identifying Monoids: Exploiting Compositional Structure in Code


Photo by @hankadusikova

I was particularly interested in seeing this talk, since it’s essentially a one-hour answer to a question I asked Ben in one of his CppCon 2018 talks. I wasn’t disappointed.

The core of Ben's presentation was that identifying monoids (a set together with a closed, associative binary operation and an identity element, e.g. the set of integers under integer addition with identity 0) in your types allows you to:

  1. Expose the underlying structures in your code
  2. Exploit these structures to make your types and operations more clear and composable

He took a very practical code-based approach, so the talk is very accessible for people who have found some of the mathematical underpinnings which he’s talking about difficult to understand.
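To make the idea concrete, here is a minimal sketch of a monoid in TypeScript (Ben's talk is C++-focused; this example is mine, not taken from the talk):

interface Monoid<T> {
    empty: T;                      // identity element
    concat: (a: T, b: T) => T;     // closed, associative binary operation
}

// The integers under addition form a monoid with identity 0.
const sum: Monoid<number> = { empty: 0, concat: (a, b) => a + b };

// Associativity means a list of values can be folded (or combined in parallel)
// without worrying about how the combinations are grouped.
const fold = <T>(m: Monoid<T>, xs: T[]): T => xs.reduce(m.concat, m.empty);

fold(sum, [1, 2, 3, 4]); // 10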

Next Year

Next year it will run for two days and I expect a stronger turnout due to the success of its first run. I’d highly recommend going along and hope to see you there!

The post Cppp 2019 Trip Report appeared first on C++ Team Blog.


Visual Studio tips and tricks


Whether you are new or have been using Visual Studio for years, there are a bunch of tips and tricks that can make you more productive. We’ve been sharing tips on Twitter using the #vstip hashtag for a while, and this is a collection of the best ones so far.

Debugger

Hitting F10 (to build, run, and attach the debugger) instead of F5 will automatically break the first time your own code is executed. No breakpoints needed.

Supported from Visual Studio 2005

 

Reattach to process (Shift+Alt+P) is extremely helpful when you have to attach to the same process again and again.

Supported from Visual Studio 2017 v15.8

 

A blue dot in the margin indicates a switch of threads while stepping through debugging.

Supported from Visual Studio 2013

Solution

Improve performance of solution load and reduce visual noise by disabling restore of node expansions in Solution Explorer as well as Reopen documents on solution load.

Supported from Visual Studio 2019

 

For fast keyboard navigation, use Ctrl+T to find anything in your solution – files, classes etc.

Supported from Visual Studio 2017

 

Assign a keyboard shortcut to perform a “git pull” so you don’t have to use CLI or Team Explorer to ensure your repo is up to date.

Supported in Visual Studio 2019

 

Make Solution Explorer automatically select the current active document, so you never lose track of its location in the project.

Supported from Visual Studio 2010

Editor

Easily surround HTML elements with a <div> using Shift+Alt+W. The inserted <div> is selected so you can easily edit it to be any tag you’d like, and the end-tag matches up automatically.

Supported from Visual Studio 2017

 

Copy any JSON fragment to the clipboard and paste it as strongly typed .NET classes into any C# or VB code file.

Supported from Visual Studio 2013

 

You don't need to write quotation marks around JSON property names; simply type a colon and Visual Studio will insert the quotes automatically.

Supported in Visual Studio 2015

 

Make IntelliSense and tooltips semi-transparent for the duration you press and hold the Control key.

Supported from Visual Studio 2010

 

Instead of retyping ‘(‘ to show parameter info in method signatures, use Ctrl+Shift+Space to display the currently used overload.

Supported from Visual Studio 2010

 

Miscellaneous

Play a sound when certain events occur within Visual Studio.

Supported from Visual Studio 2010

 

Create custom window layouts for specific development scenarios or monitor setups and switch between them easily.

Supported from Visual Studio 2017

 

Specify which Visual Studio components are required for any solution, and Visual Studio will prompt the user to install them if missing. Read more in the blog post Configure Visual Studio across your organization with .vsconfig.

Supported from Visual Studio 2019

Extensions

Visual Studio Spell Checker. An editor extension that checks the spelling of comments, strings, and plain text as you type or interactively with a tool window. It can also spell check an entire solution, project, or selected items. Options are available to define multiple languages to spell check against.

Supported from Visual Studio 2013

 

Add New File. A Visual Studio extension for easily adding new files to any project. Simply hit Shift+F2 to create an empty file in the selected folder or in the same folder as the selected file.

Supported from Visual Studio 2015

 

Git Diff Margin. Git Diff Margin displays live Git changes of the currently edited file on Visual Studio margin and scroll bar.

Supported from Visual Studio 2012

These were just a few of the thousands of available extensions. To see more extensions, go to the Visual Studio Marketplace.

In closing

These were just a few hand-picked tips from the #vstip hashtag on Twitter. There are plenty more to check out. If you have some great tips, please share them using the #vstip hashtag so we can all easily find them.

The post Visual Studio tips and tricks appeared first on The Visual Studio Blog.

OneDrive Personal Vault brings added security to your most important files and OneDrive gets additional storage options

Python in Visual Studio Code – June 2019 Release


We are pleased to announce that the June 2019 release of the Python Extension for Visual Studio Code is now available. You can download the Python extension from the Marketplace, or install it directly from the extension gallery in Visual Studio Code. If you already have the Python extension installed, you can also get the latest update by restarting Visual Studio Code. You can learn more about Python support in Visual Studio Code in the documentation.

In this release we made improvements that are listed in our changelog, closing a total of 70 issues including a plot viewer with the Python Interactive window, parallel tests with pytest, and indentation of run selection in the terminal.

Plot Viewer with the Python Interactive window

Plots are commonly used for data visualization. One of the top requested features for the Python Interactive window is to enable deeper inspection of the generated plots, e.g. zooming, panning, and exporting images. The June 2019 update included a brand-new Plot Viewer that can be used to manipulate any image plots, such as the popular matplotlib plots.

You can try it out by double-clicking on the plots or clicking on the “expand image” button that is displayed when you hover over plot images in the Python Interactive Window:

With the plot viewer, you can pan, zoom in/out, navigate through plots in the current session, and export plots to PDF, SVG, or PNG formats.

Parallel tests with pytest

We made enhancements to the reliability of the statistics displayed for test runs, in particular for running tests in parallel with pytest.

You can try out running tests in parallel with pytest by installing the pytest-xdist package and adding "-n <number of CPUs>" to a configuration file. For example, to use 4 CPUs, create a pytest.ini file in the project folder and add the content below:

[pytest]
addopts=-n4

Now when you run and debug tests, they’ll be executed in parallel.

You can refer to our documentation to learn more about testing support in the Python extension.

Indentation of run selection in the terminal

A highly requested VS Code Python feature on our GitHub repository was to dedent code selections before sending them to the terminal when running the "Run Selection/Line in Python Terminal" command. Starting in this release, the command dedents the selection based on its first non-empty line before sending it to the terminal.

Other Changes and Enhancements

We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python in Visual Studio Code. Some notable changes include:

  • Allow password for remote authentication with jupyter. (#3624)
  • Data Viewer now properly handles large data frames and supports filtering with expressions on numeric columns (greater than, less than, equals to) (#5469)
  • Show preview of imported notebook in the Python Interactive window. (#5675)
  • Add support for sub process debugging, when debugging tests. (#4525)
  • Added support for activation of conda environments in powershell. (#668)
  • Add ‘ctrl+enter’ as a keyboard shortcut for run current cell. (#5673)

Be sure to download the Python extension for Visual Studio Code now to try out the above improvements. If you run into any problems, please file an issue on the Python VS Code GitHub page.

The post Python in Visual Studio Code – June 2019 Release appeared first on Python.

Announcing the general availability of Azure premium files


Highly performant, fully managed file service in the cloud!

Today, we are excited to announce the general availability of Azure premium files for customers optimizing their cloud-based file shares on Azure. Premium files offers a higher level of performance built on solid-state drives (SSD) for fully managed file services in Azure.

Premium tier is optimized to deliver consistent performance for IO-intensive workloads that require high-throughput and low latency. Premium file shares store data on the latest SSDs, making them suitable for a wide variety of workloads like databases, persistent volumes for containers, home directories, content and collaboration repositories, media and analytics, high variable and batch workloads, and enterprise applications that are performance sensitive. Our existing standard tier continues to provide reliable performance at a low cost for workloads less sensitive to performance variability, and is well-suited for general purpose file storage, development/test, backups, and applications that do not require low latency.

Through our initial introduction and preview journey, we’ve heard from hundreds of our customers from different industries about their unique experiences. They’ve shared their learnings and success stories with us and have helped make premium file shares even better.

“Working with clients that have large amounts of data that is under FDA or HIPAA regulations, we always struggled in locating a good cloud storage solution that provided SMB access and high bandwidth… until Azure Files premium tier. When it comes to a secure cloud-based storage that offers high upload and download speeds for cloud and on-premises VM clients, Azure premium files definitely stands out.”

– Christian Manasseh, Chief Executive Officer, Mobius Logic

“The speeds are excellent. The I/O intensive actuarial CloudMaster software tasks ran more than 10 times faster in the Azure Batch solution using Azure Files premium tier. Our application has been run by our clients using 1000’s of cores and the Azure premium files has greatly decreased our run times.”

– Scott Bright, Manager Client Data Services, PolySystems

Below are the key benefits of the premium tier. If you’re looking for more technical details, read the previous blog post “Premium files redefine limits for Azure Files.”

Performant, dynamic, and flexible

With premium tier, performance is what you define. Premium file shares’ performance can instantly scale up and down to fit your workload’s performance characteristics. Premium file shares can massively scale up to 100 TiB capacity and 100K IOPS with a target total throughput of 10 GiB/s. Not only do premium shares let you dynamically tune performance, they also offer bursting to meet highly variable workload requirements with short peaks of intense IOPS.

"We recently migrated our retail POS microservices to Azure Kubernetes Service with premium files. Our experience has been simply amazing - premium files permitted us to securely deploy our 1.2K performant Firebird databases. No problem with size or performance, just adapt the size of the premium file share to instantly scale. It improved our business agility, much needed to serve our rapidly growing customer base across multiple retail chains in France."

– Arnaud Le Roy, Chief Technology Officer, Menlog

We partnered with our internal Azure SQL and Microsoft Power BI teams to build solutions on premium files. As a result, Azure Database for PostgreSQL and Azure Database for MySQL recently opened a preview of increased scale: 16 TiB databases with 20,000 IOPS, powered by premium files. Microsoft Power BI announced a preview of an enhanced dataflows compute engine, up to 20 times faster, built on Azure Files premium tier.

Global availability with predictable cost

Azure Files premium tier is currently available in 19 Azure regions globally. We are continually expanding regional coverage. You can check the Azure region availability page for the latest information.

Premium tier provides the most cost-effective way to create highly performant and highly available file shares in Azure. Pricing is simple and cost is predictable: you pay a single price per provisioned GiB. Refer to the pricing page for additional details.

Seamless Azure experience

Customers receive all features of Azure Files in this new offering, including snapshot/restore, Azure Kubernetes Service and Azure Backup integration, monitoring, hybrid support via Azure File Sync, Azure portal, PowerShell/CLI/Cloud Shell, AzCopy, Azure Storage Explorer support, and the list goes on. Developers can leverage their existing code and skills to migrate applications using familiar Azure Storage client libraries or Azure Files REST APIs. The opportunities for future integration are limitless. Reach out to us if you would like to see more.

With the availability of premium tier, we’re also enhancing the standard tier. To learn more, visit the onboarding instructions for the standard files 100 TiB preview.

Get started and share your experiences

It is simple and takes two minutes to get started with premium file shares. Please see the detailed steps for creating a premium file share.
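
As a rough illustration of how little is involved, here is a minimal Python sketch using the azure-storage-file-share client library. It assumes you have already created a premium (FileStorage-kind) storage account and have its connection string; the share name and quota below are placeholders, not recommendations:

# pip install azure-storage-file-share
from azure.storage.fileshare import ShareClient

connection_string = "<premium-filestorage-account-connection-string>"

share = ShareClient.from_connection_string(
    conn_str=connection_string,
    share_name="premiumshare",
)
# quota is the provisioned size in GiB; on the premium tier the provisioned
# size also determines the baseline IOPS and throughput of the share.
share.create_share(quota=100)
print("Created premium file share:", share.share_name)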

Visit Azure Files premium tier documentation to learn more. As always, you can share your feedback and experiences on the Azure Storage forum or email us at azurefiles@microsoft.com. Post your ideas and suggestions about Azure Storage on our feedback forum.

Event-driven analytics with Azure Data Lake Storage Gen2


Most modern-day businesses employ analytics pipelines for real-time and batch processing. A common characteristic of these pipelines is that data arrives at irregular intervals from diverse sources. This adds complexity in terms of having to orchestrate the pipeline such that data gets processed in a timely fashion.

The answer to these challenges lies in coming up with a decoupled event-driven pipeline using serverless components that responds to changes in data as they occur.

An integral part of any analytics pipeline is the data lake. Azure Data Lake Storage Gen2 provides secure, cost effective, and scalable storage for the structured, semi-structured, and unstructured data arriving from diverse sources. Azure Data Lake Storage Gen2’s performance, global availability, and partner ecosystem make it the platform of choice for analytics customers and partners around the world. Next comes the event processing aspect. With Azure Event Grid, a fully managed event routing service, Azure Functions, a serverless compute engine, and Azure Logic Apps, a serverless workflow orchestration engine, it is easy to perform event-based processing and workflows responding to the events in real-time.

Today, we’re very excited to announce that Azure Data Lake Storage Gen2 integration with Azure Event Grid is in preview! This means that Azure Data Lake Storage Gen2 can now generate events that can be consumed by Event Grid and routed to subscribers with webhooks, Azure Event Hubs, Azure Functions, and Logic Apps as endpoints. With this capability, individual changes to files and directories in Azure Data Lake Storage Gen2 can automatically be captured and made available to data engineers for creating rich big data analytics platforms that use event-driven architectures.

Modern data warehouse

The diagram above shows a reference architecture for the modern data warehouse pipeline built on Azure Data Lake Storage Gen2 and Azure serverless components. Data from various sources lands in Azure Data Lake Storage Gen2 via Azure Data Factory and other data movement tools. Azure Data Lake Storage Gen2 generates events for new file creations, updates, renames, or deletes, which are routed via Event Grid and an Azure Function to Azure Databricks. An Azure Databricks job processes the file and writes the output back to Azure Data Lake Storage Gen2. When this happens, Azure Data Lake Storage Gen2 publishes a notification to Event Grid, which invokes an Azure Function to copy data to Azure SQL Data Warehouse. Data is finally served via Azure Analysis Services and Power BI.

The events that will be made available for Azure Data Lake Storage Gen2 are BlobCreated, BlobDeleted, BlobRenamed, DirectoryCreated, DirectoryDeleted, and DirectoryRenamed. Details on these events can be found in the documentation “Azure Event Grid event schema for Blob storage.”
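
To give a feel for the consumption side, the sketch below shows a hypothetical Python Azure Function with an Event Grid trigger that reacts to BlobCreated events from the data lake. The function name, binding name, and downstream action are assumptions for illustration, and it presumes the usual eventGridTrigger binding in function.json:

# __init__.py of a hypothetical Event Grid-triggered Azure Function.
import logging
import azure.functions as func

def main(event: func.EventGridEvent):
    data = event.get_json()
    if event.event_type == "Microsoft.Storage.BlobCreated":
        # For created-file events, the payload includes the file URL.
        logging.info("New file landed in the data lake: %s", data.get("url"))
        # Kick off downstream processing here, e.g. submit a Databricks job
        # or copy the file onward -- whatever your pipeline needs.
    else:
        logging.info("Ignoring event type %s for subject %s",
                     event.event_type, event.subject)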

Some key benefits include:

  • Seamless integration to automate workflows enables customers to build an event-driven pipeline in minutes.
  • Enable alerting with rapid reaction to creation, deletion, and renaming of files and directories. A myriad of scenarios would benefit from this – especially those associated with data governance and auditing. For example, alert and notify of all changes to high business impact data, set up email notifications for unexpected file deletions, as well as detect and act upon suspicious activity from an account.
  • Eliminate the complexity and expense of polling services and integrate events coming from your data lake with third-party applications using webhooks such as billing and ticketing systems.

Next steps

Azure Data Lake Storage Gen2 Integration with Azure Event Grid is now available in West Central US and West US 2. Subscribing to Azure Data Lake Storage Gen2 events works the same as it does for Azure Storage accounts. To learn more, see the documentation “Reacting to Blob storage events.” We would love to hear more about your experiences with the preview and get your feedback at ADLSGen2QA@microsoft.com.

Auditing for Azure DevOps is now in Public Preview


We’re excited to announce that Auditing for Azure DevOps is now available for all organizations as a Public Preview! As Azure DevOps keeps growing and is adopted by more enterprises, our customers have been asking for the ability to monitor activities and changes throughout their organizations.

When an auditable event occurs, a log entry is recorded. These events may occur in any portion of Azure DevOps; some examples of auditable events include: Git repository creations, permission changes, resource deletions, code downloads, accessing the auditing feature, and much more.

The audit events include information such as who caused the event to be logged and their IP, what happened, and other useful details that can help you answer the who, what, when, and where questions.

Audit Log Page

We will be working over the next few months on enhancing auditing with new features. In the next quarter we’ll be working on a streaming feature which will allow you to send your logs to first- and third-party Security Information and Event Management (SIEM) tools. The use of these tools along with auditing will give you more transparency into your workforce and allow for anomaly detection, trend visualization, and more!

The auditing feature can be found under the Organizations settings. For more information, see our documentation.

We’d love to hear your feedback as we continue to move towards making this feature generally available! You can share your thoughts directly with the product team using @AzureDevOps, Developer Community, or comment on this post.

The post Auditing for Azure DevOps is now in Public Preview appeared first on Azure DevOps Blog.

Solving the problem of duplicate records in healthcare


As the U.S. healthcare system continues to transition away from paper to a more digitized ecosystem, the ability to link all of an individual’s medical data together correctly becomes increasingly challenging. Patients move, marry, divorce, change names, and visit multiple providers throughout their lifetime; each visit creates new records, and the potential for inconsistent or duplicate information grows. Duplicate medical records often occur as a result of multiple name variations, data entry errors, and lack of interoperability—or communication—between systems. Poor patient identification and duplicate records in turn lead to diagnosis errors, redundant medical tests, skewed reporting and analytics, and billing inaccuracies.

The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we will describe how one Microsoft partner, NextGate, uses Azure to solve a unique problem.

Patient matching

The process of reconciling electronic health records is called “patient matching,” and it is a major obstacle to improving care coordination and patient safety. Further, duplicate records are financially crippling, costing the average hospital $1.5 million and our nation’s healthcare system over $6 billion annually. As data sharing matures and the industry pivots toward value, an enterprise view of patient information is essential for informed clinical decision-making, effective episodic care, and a seamless patient-provider experience during every encounter.

As more data is generated and more applications are introduced into the health IT environment, today’s organizations must engage in more comprehensive patient matching approaches.

The puzzle of disjointed electronic health records

While electronic health records (EHRs) have become commonplace, the disjointed, competitive nature of IT systems contributes to a proliferation of siloed, disconnected information. Many EHR systems make sharing data arduous, even in a single-system electronic medical record environment. Further, master patient indexes (MPI) within EHR systems were designed for a single vendor-based environment and lack the sophisticated algorithms for linking data across various settings of care and disparate systems. When sent downstream, duplicate and disjointed patient demographics trigger further harm including increased waste and inefficiencies, suboptimal outcomes, and lost revenue. Without common technical standards in place, EHR systems continue to collect information in various formats that only serve to exacerbate the issue of duplicate record creation.

Solution

NextGate’s Enterprise Master Patient Index (EMPI) platform is a significant step towards improving a health system’s data management and governance framework. This solution manages patient identities for more than two-thirds of the U.S. population, and one-third of the U.K. population. It empowers clinicians and their organizations to make informed, life-saving decisions by seamlessly linking medical records from any given system and reconciling data discrepancies across multiple sites of care. The automated identity matching platform uses both probabilistic and deterministic matching algorithms to account for minor variations in patient data to generate a single best record that follows the patient throughout the care journey.
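
To make the two matching styles concrete, the toy sketch below contrasts a deterministic rule (exact agreement on a strong identifier) with a probabilistic score (fuzzy similarity across demographic fields). The field names and threshold are invented for illustration; this is not NextGate's algorithm:

# Illustrative only: toy deterministic vs. probabilistic record matching.
from difflib import SequenceMatcher

def deterministic_match(a: dict, b: dict) -> bool:
    # Exact agreement on a strong identifier (here a made-up MRN field).
    return bool(a.get("mrn")) and a.get("mrn") == b.get("mrn")

def probabilistic_score(a: dict, b: dict) -> float:
    # Average fuzzy similarity across a few demographic fields.
    fields = ("first_name", "last_name", "dob")
    sims = [SequenceMatcher(None, str(a.get(f, "")).lower(),
                            str(b.get(f, "")).lower()).ratio() for f in fields]
    return sum(sims) / len(sims)

rec1 = {"mrn": "", "first_name": "Katherine", "last_name": "Smith-Jones", "dob": "1980-04-12"}
rec2 = {"mrn": "", "first_name": "Kathryn",   "last_name": "Smith Jones", "dob": "1980-04-12"}

if deterministic_match(rec1, rec2) or probabilistic_score(rec1, rec2) > 0.85:
    print("Likely the same patient -> link the records")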

A graphic showing the system that NextGate's Enterprise Master Patient Index uses to create and maintain patient records.

Benefits

  • Enhanced clinical decision-making.
  • Improved patient safety (reduced medical errors).
  • Decreased number of unnecessary or duplicate testing/procedures.
  • Improved interoperability and data exchange.
  • Trusted and reliable data quality.
  • Reduced number of denied claims and other reimbursement delays.
  • Improved administrative efficiencies.
  • Higher patient and provider satisfaction.

Azure services

  • Azure Security Center reinforces the security posture of the NextGate solution against threats and provides recommendations to harden it.
  • Azure Monitor provides telemetry data about the NextGate application to ensure its health.
  • Azure Virtual Machines provide compute power, enabling auto-scaling and supporting Linux and open source services.
  • Azure SQL Database and Azure Database for PostgreSQL enable NextGate solutions to easily scale with more compute power (scale-up) or more database units (scale-out).

Next steps

  • To find out more about this solution, go to Nextgate EMPI and click Contact me.
  • To see more about Azure in the healthcare industry see Azure for health.

New PCI DSS Azure Blueprint makes compliance simpler


I’m excited to announce our second Azure Blueprint for an important compliance standard with the release of the PCI-DSS v3.2.1 blueprint. The new blueprint maps a core set of policies for Payment Card Industry (PCI) Data Security Standards (DSS) compliance to any Azure deployed architecture, allowing businesses such as retailers to quickly create new environments with compliance built in to the Azure infrastructure.

Azure Blueprints is a free service that enables customers to define a repeatable set of Azure resources that implement and adhere to standards, patterns, and requirements. It lets customers set up governed Azure environments that can scale to support production implementations for large-scale migrations.

Azure Blueprint configuration screen.

Azure Blueprints is another reason why Azure is a strong platform for compliance, with the industry’s broadest and deepest portfolio of 91 compliance offerings. Azure is built using some of the most rigorous security and compliance standards in the world, and includes multi-layered security provided by Microsoft across physical datacenters, infrastructure, and operations. Azure is also built for the specific compliance needs of key industries, including over 50 compliance offerings specifically for the retail, health, government, finance, education, manufacturing, and media industries.

Compliance with regulations and standards such as ISO 27001, FedRAMP and SOC is increasingly necessary for all types of organizations, making control mappings to compliance standards a natural application for Azure Blueprints. Azure customers, particularly those in regulated industries, have expressed strong interest in compliance blueprints to help ease their compliance burdens. In March, we announced the ISO 27001 Shared Services blueprint sample which maps a set of foundational Azure infrastructure, such as virtual networks and policies, to specific ISO controls.

The PCI DSS is a global information security standard designed to prevent fraud through increased control of credit card data. Organizations that accept payments from credit cards must follow PCI DSS standards if they accept payment cards from the five major credit card brands. Compliance with PCI DSS is also required for any organization that stores, processes, or transmits payment and cardholder data.

The PCI-DSS v3.2.1 blueprint includes mappings to important PCI DSS controls, including:

  • Segregation of duties. Manage subscription owner permissions.
  • Access to networks and network services. Implement role-based access control (RBAC) to manage who has access to Azure resources.
  • Management of secret authentication information of users. Audit accounts that don't have multi-factor authentication enabled.
  • Review of user access rights. Audit accounts that should be prioritized for review, including deprecated accounts and external accounts with elevated permissions.
  • Removal or adjustment of access rights. Audit deprecated accounts with owner permissions on a subscription.
  • Secure log-on procedures. Audit accounts that don't have multi-factor authentication enabled.
  • Password management system. Enforce strong passwords.
  • Policy on the use of cryptographic controls. Enforce specific cryptographic controls and audit use of weak cryptographic settings.
  • Event and operator logging. Diagnostic logs provide insight into operations that were performed within Azure resources.
  • Administrator and operator logs. Ensure system events are logged.
  • Management of technical vulnerabilities. Monitor missing system updates, operating system vulnerabilities, SQL vulnerabilities, and virtual machine vulnerabilities in Azure Security Center.
  • Network controls. Manage and control networks and monitor network security groups with permissive rules.
  • Information transfer policies and procedures. Ensure information transfer with Azure services is secure.

We are committed to helping our customers leverage Azure in a secure and compliant manner. Over the next few months we will release new built-in blueprints for HITRUST, UK National Health Service (NHS) Information Governance (IG) Toolkit, FedRAMP, and Center for Internet Security (CIS) Benchmark. If you would like to participate in any early previews please sign up with this form, or if you have a suggestion for a compliance blueprint, please share it via the Azure Governance Feedback Forum.

Learn more about the Azure PCI-DSS v3.2.1 blueprint in our documentation.

Azure Blockchain Workbench 1.7.0 integration with Azure Blockchain Service


We’re excited to share the release of Microsoft Azure Blockchain Workbench 1.7.0, which, along with our new Azure Blockchain Service, can further enhance your blockchain development and projects. You can deploy a new instance of Blockchain Workbench through the Azure portal or upgrade your existing deployments to 1.7.0 using the upgrade script.

This update includes the following improvements:

Integration with Azure Blockchain Service

With the Azure Blockchain Service now in preview, you can develop directly with Blockchain Workbench on Azure Blockchain Service as the underlying blockchain. For those of you who have been on this blockchain journey with Microsoft, there are now templates in Azure which make it faster to configure and deploy a private blockchain network, but it’s still up to you to maintain and run your blockchain nodes, including upgrading to new versions, installing security patches, and more. Azure Blockchain Service simplifies the maintenance of the underlying blockchain network by running a fully managed blockchain node for you.

Azure Blockchain Service overview page

Blockchain Workbench helps with building the scaffolding needed on top of a blockchain network to quickly iterate and develop blockchain solutions. Workbench 1.7.0 enables you to easily deploy the Azure Blockchain Service directly with Workbench. To deploy Workbench from the Azure Marketplace, navigate to the Advanced settings blade and select Create new blockchain network under Blockchain settings.

Azure Blockchain Service settings from Azure Blockchain Workbench deployment experience

Selecting this option will automatically deploy an Azure Blockchain Service node for you. Note that if you rotate the primary API key on the primary transaction node on your Azure Blockchain Service, you need to change the key of the configured RPC endpoint on Blockchain Workbench. Update the Key Vault with the new key and reboot the VMs.

Enhanced compatibility with Quorum

One of the most requested features from customers is compatibility with additional blockchain network protocols. In previous releases of Blockchain Workbench, the default blockchain network configured is an Ethereum Proof-of-Authority (PoA) network. With Blockchain Workbench 1.7.0, we have added compatibility with the Quorum blockchain network.

For customers who are looking to build blockchain applications on top of Quorum, you can now develop and build your Quorum based applications directly with Blockchain Workbench.

You can stay up to date on Azure Blockchain Service by following the team on Twitter @MSFTBlockchain. Please use the Blockchain UserVoice to provide feedback and suggest features and ideas. Your input is helping make this a great service. We look forward to hearing from you.

A solution to manage policy administration from end to end


Legacy systems can be a nightmare for any business to maintain. In the insurance industry, carriers struggle not only to maintain these systems but to modify and extend them to support new business initiatives. The insurance business is complex; every state and nation has its own unique set of rules, regulations, and demographics. Creating a new product such as an automobile policy has traditionally required the coordination of many different processes, systems, and people. The monolithic systems traditionally used to create new products are inflexible, and creating a new product can be an expensive proposition.

The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how one Microsoft partner, Sunlight Solutions, uses Azure to solve a unique problem.

Monolithic systems and their problems

Insurers have long been restricted by complex digital ecosystems created by single-service solutions. Those tasked with maintaining such legacy, monolithic systems struggle as the systems age and become more unwieldy. Upgrades and enhancements often require significant new development, large teams, and long-term planning, all of which are expensive, unrealistic, and a drain on morale. Worse, they restrict businesses from pursuing new and exciting opportunities.

A flexible but dedicated solution

An alternative is a single solution provider that is well versed in the insurance business but able to create a dedicated and flexible solution, one that overcomes the problems of a monolith. Sunlight is such a provider. It allows insurance carriers to leverage the benefits of receiving end-to-end insurance administration functionality from a single vendor. At the same time, their solution provides greater flexibility, speed-to-market, and fewer relationships to manage with lower integration costs.

Sunlight’s solution is a single system which manages end-to-end functionality across policy, billing, claims, forms management, customer/producer CRM, reporting and much more. According to Sunlight:

“We are highly flexible, managed through configuration rather than development. This allows for rapid speed to market for the initial deployment and complete flexibility when you need to make changes or support new business initiatives. Our efficient host and continuous delivery models address many of the industry’s largest challenges with respect to managing the cost and time associated with implementation, upgrades, and product maintenance.”

In order to achieve their goals of being quick but pliable, the architecture of the solution is a mixture of static and dynamic components. Static components are fields that do not change; dynamic components such as lists populate at run time. As the graphic below shows, the solution uses static elements but lets users configure dynamic parts as needed. The result is a faster cycle that maintains familiarity but allows a variety of data types.

Diagram image of Sunlight's solution using static elements and letting user configure with dynamic parts

In the figure above, data appears depending on the product. When products are acquired, for example through mergers, the static data can be mapped. If a tab exists for the product, it appears. For example, “benefits” and “deductibles” are not a part of every product.

Benefits

In brief, here are the key gains made by using Sunlight:

  • End-to-end functionality: Supports all products/coverages/lines of business
  • Cloud-based and accessible anywhere
  • Supports multiple languages and currencies
  • Globally configurable for international taxes and regional regulatory controls
  • Highly configurable by non-IT personnel
  • Reasonable price-point

Azure services

  • Azure Virtual Machines are used to implement the entire project life cycle quickly.
  • Azure Security Center provides a complete and dynamic infrastructure that continuously improves on its own.
  • Azure Site Recovery plans are simple to implement for our production layer.
  • Azure Functions is utilized in order to quickly replicate environments.
  • Azure Storage is used to keep the application light with a range of storage options for increased access time based on the storage type.

Next steps

To learn more about other industry solutions, go to the Azure for insurance page. To find more details about this solution, go to Sunlight Enterprise on the Azure Marketplace and select Contact me.

Leveraging complex data to build advanced search applications with Azure Search


Data is rarely simple. Not every piece of data we have can fit nicely into a single Excel worksheet of rows and columns. Data has many diverse relationships, such as the multiple locations and phone numbers for a single customer or the multiple authors and genres of a single book. Of course, relationships are typically even more complex than this, and as we start to leverage AI to understand our data, the additional learnings we get only add to the complexity of those relationships. For that reason, expecting customers to flatten their data so it can be searched and explored is often unrealistic. We heard this often, and it quickly became our number one most requested Azure Search feature. Because of this, we were excited to announce the general availability of complex types support in Azure Search. In this post, I want to take some time to explain what complex types support adds to Azure Search and the kinds of things you can build using this capability.

Azure Search is a platform as a service that helps developers create their own cloud search solutions.

What is complex data?

Complex data consists of data that includes hierarchical or nested substructures that do not break down neatly into a tabular rowset. For example, a book with multiple authors, where each author can have multiple attributes, can’t be represented as a single row of data unless there is a way to model the authors as a collection of objects. Complex types provide this capability, and they can be used when the data cannot be modeled in simple field structures such as strings or integers.

Complex types applicability

At Microsoft Build 2019, we demonstrated how complex types can be leveraged to build out an effective search application. In the session we looked at the Travel Stack Exchange site, one of the many online communities supported by StackExchange.

The StackExchange data was modeled in a JSON structure to allow easy ingestion into Azure Search. If we look at the first post made to the site and focus on the first few fields, we see that all of them can be modeled using simple datatypes, including Tags, which can be modeled as a collection (array) of strings.

{
   "id": "1",
    "CreationDate": "2011-06-21T20:19:34.73",
    "Score": 8,
    "ViewCount": 462,
    "BodyHTML": "<p>My fiancée and I are looking for a good Caribbean cruise in October and were wondering which
    "Body": "my fiancée and i are looking for a good caribbean cruise in october and were wondering which islands
    "OwnerUserId": 9,
    "LastEditorUserId": 101,
    "LastEditDate": "2011-12-28T21:36:43.91",
    "LastActivityDate": "2012-05-24T14:52:14.76",
    "Title": "What are some Caribbean cruises for October?",
    "Tags": [
        "caribbean",
        "cruising",
        "vacations"
    ],
    "AnswerCount": 4,
    "CommentCount": 4,
    "CloseDate": "0001-01-01T00:00:00",​

However, as we look further down the dataset, we see that the data quickly gets more complex and cannot be mapped to a flat structure. For example, there can be numerous comments and answers associated with a single document. Even Votes is defined here as a complex type (technically it could have been flattened, but that would add work to transform the data).

"CloseDate": "0001-01-01T00:00:00",
    "Comments": [
        {
            "Score": 0,
            "Text": "To help with the cruise line question: Where are you located? My wife and I live in New Orlea
            "CreationDate": "2011-06-21T20:25:14.257",
           "UserId": 12
        },
        {
            "Score": 0,
            "Text": "Toronto, Ontario. We can fly out of anywhere though.",
            "CreationDate": "2011-06-21T20:27:35.3",
            "UserId": 9
        },
        {
            "Score": 3,
            "Text": ""Best" for what?  Please read [this page](http://travel.stackexchange.com/questions/how-to
            "UserId": 20
        },
        {
            "Score": 2,
            "Text": "What do you want out of a cruise? To relax on a boat? To visit islands? Culture? Adventure?
            "CreationDate": "2011-06-24T05:07:16.643",
            "UserId": 65
        }
    ],
    "Votes": {
        "UpVotes": 10,
        "DownVotes": 2
    },
    "Answers": [
        {
            "IsAcceptedAnswer": "True",
            "Body": "This is less than an answer, but more than a comment…nnA large percentage of your travel b
            "Score": 7,
            "CreationDate": "2011-06-24T05:12:01.133",
            "OwnerUserId": 74

All of this data is important to the search experience. For example, you might want to search over the text of the comments and answers, filter or sort on scores and vote counts, or surface whether a post has an accepted answer.

In fact, we could even improve on the existing StackExchange search interface by leveraging Cognitive Search to extract key phrases from the answers to supply potential phrases for autocomplete as the user types in the search box.

All of this is now possible because not only can you map this data to a complex structure, but the search queries can support this enhanced structure to help build out a better search experience.
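
As a sketch of what that modeling can look like, the snippet below creates an index with nested Comments and Votes fields through the Azure Search REST API, using the Edm.ComplexType and Collection(Edm.ComplexType) types. The service name, admin key, and field choices are placeholder assumptions, not the exact index used in the Build session:

# Placeholder sketch: creates a "posts" index with complex fields via the
# Azure Search REST API (api-version 2019-05-06, which supports complex types).
import requests

service = "<your-search-service>"
admin_key = "<your-admin-api-key>"
url = f"https://{service}.search.windows.net/indexes/posts?api-version=2019-05-06"

index_definition = {
    "name": "posts",
    "fields": [
        {"name": "id", "type": "Edm.String", "key": True},
        {"name": "Title", "type": "Edm.String", "searchable": True},
        {"name": "Tags", "type": "Collection(Edm.String)", "searchable": True, "filterable": True, "facetable": True},
        # A collection of sub-objects: each comment keeps its own fields.
        {"name": "Comments", "type": "Collection(Edm.ComplexType)", "fields": [
            {"name": "Text", "type": "Edm.String", "searchable": True},
            {"name": "Score", "type": "Edm.Int32", "filterable": True},
            {"name": "UserId", "type": "Edm.Int32", "filterable": True},
        ]},
        # A single nested object.
        {"name": "Votes", "type": "Edm.ComplexType", "fields": [
            {"name": "UpVotes", "type": "Edm.Int32", "filterable": True},
            {"name": "DownVotes", "type": "Edm.Int32", "filterable": True},
        ]},
    ],
}

response = requests.put(url, json=index_definition,
                        headers={"api-key": admin_key, "Content-Type": "application/json"})
response.raise_for_status()
print("Index created or updated:", response.status_code)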

Next Steps

If you would like to learn more about Azure Search complex types, please visit the documentation, or check out the video and associated code I made which digs into this Travel StackExchange data in more detail.

Simplify Your Code With Rocket Science: C++20’s Spaceship Operator


This post is part of a regular series of posts where the C++ product team here at Microsoft and other guests answer questions we have received from customers. The questions can be about anything C++ related: MSVC toolset, the standard language and library, the C++ standards committee, isocpp.org, CppCon, etc. Today’s post is by Cameron DaCamara.

C++20 adds a new operator, affectionately dubbed the “spaceship” operator: <=>. There was a post a while back by our very own Simon Brand detailing this new operator along with some conceptual information about what it is and does. The goal of this post is to explore some concrete applications of this strange new operator and its associated counterpart, operator== (yes, it has been changed, for the better!), all while providing some guidelines for its use in everyday code.

Comparisons

It is not an uncommon thing to see code like the following:

struct IntWrapper {
  int value;
  constexpr IntWrapper(int value): value{value} { }
  bool operator==(const IntWrapper& rhs) const { return value == rhs.value; }
  bool operator!=(const IntWrapper& rhs) const { return !(*this == rhs);    }
  bool operator<(const IntWrapper& rhs)  const { return value < rhs.value;  }
  bool operator<=(const IntWrapper& rhs) const { return !(rhs < *this);     }
  bool operator>(const IntWrapper& rhs)  const { return rhs < *this;        }
  bool operator>=(const IntWrapper& rhs) const { return !(*this < rhs);     }
};

Note: eagle-eyed readers will notice this is actually even less verbose than it should be in pre-C++20 code, because these functions should really all be nonmember friends; more about that later.

That is a lot of boilerplate code to write just to make sure that my type is comparable to something of the same type. Well, OK, we deal with it for a while. Then comes someone who writes this:

constexpr bool is_lt(const IntWrapper& a, const IntWrapper& b) {
  return a < b;
}
int main() {
  static_assert(is_lt(0, 1));
}

The first thing you will notice is that this program will not compile.


error C3615: constexpr function 'is_lt' cannot result in a constant expression

Ah! The problem is that we forgot constexpr on our comparison function, drat! So one goes and adds constexpr to all of the comparison operators. A few days later someone goes and adds an is_gt helper, notices that none of the comparison operators have an exception specification, and goes through the same tedious process of adding noexcept to each of the 5 overloads.

This is where C++20’s new spaceship operator steps in to help us out. Let’s see how the original IntWrapper can be written in a C++20 world:

#include <compare>
struct IntWrapper {
  int value;
  constexpr IntWrapper(int value): value{value} { }
  auto operator<=>(const IntWrapper&) const = default;
};

The first difference you may notice is the new inclusion of <compare>. The <compare> header is responsible for populating the compiler with all of the comparison category types necessary for the spaceship operator to return a type appropriate for our defaulted function. In the snippet above, the return type auto will be deduced to std::strong_ordering.

Not only did we remove 5 superfluous lines, but we don’t even have to define anything; the compiler does it for us! Our is_lt remains unchanged and just works while still being constexpr, even though we didn’t explicitly specify that in our defaulted operator<=>. That’s well and good, but some people may be scratching their heads as to why is_lt is still allowed to compile even though it does not use the spaceship operator at all. Let’s explore the answer to this question.

Rewriting Expressions

In C++20, the compiler is introduced to a new concept referred to as “rewritten” expressions. The spaceship operator, along with operator==, are the first two operators subject to expression rewriting. For a more concrete example of expression rewriting, let us break down the example provided in is_lt.

During overload resolution the compiler is going to select from a set of viable candidates, all of which match the operator we are looking for. The candidate gathering process is changed very slightly for the case of relational and equivalency operations where the compiler must also gather special rewritten and synthesized candidates ([over.match.oper]/3.4).

For our expression a < b the standard states that we can search the type of a for an operator<=> or a namespace scope function operator<=> which accepts its type. So the compiler does and it finds that, in fact, a‘s type does contain IntWrapper::operator<=>. The compiler is then allowed to use that operator and rewrite the expression a < b as (a <=> b) < 0. That rewritten expression is then used as a candidate for normal overload resolution.

You may find yourself asking why this rewritten expression is valid and correct. The correctness of the expression actually stems from the semantics the spaceship operator provides. The <=> is a three-way comparison, which implies that you get not just a binary result but an ordering (in most cases), and if you have an ordering you can express that ordering in terms of any relational operation. A quick example: the expression 4 <=> 5 in C++20 will give you back the result std::strong_ordering::less. The std::strong_ordering::less result implies that 4 is not only different from 5 but is strictly less than that value, which makes applying the operation (4 <=> 5) < 0 correct and exactly accurate to describe our result.

Using the information above the compiler can take any generalized relational operator (i.e. <, >, etc.) and rewrite it in terms of the spaceship operator. In the standard the rewritten expression is often referred to as (a <=> b) @ 0 where the @ represents any relational operation.

Synthesizing Expressions

Readers may have noticed the subtle mention of “synthesized” expressions above and they play a part in this operator rewriting process as well. Consider a different predicate function:

constexpr bool is_gt_42(const IntWrapper& a) {
  return 42 < a;
}

If we use our original definition for IntWrapper this code will not compile.

error C2677: binary '<': no global operator found which takes type 'const IntWrapper' (or there is no acceptable conversion)

This makes sense in pre-C++20 land, and the way to solve this problem would be to add some extra friend functions to IntWrapper which take a left-hand side of int. If you try to build that sample with a C++20 compiler and our C++20 definition of IntWrapper you might notice that it, again, “just works”—another head scratcher. Let’s examine why the code above is still allowed to compile in C++20.

During overload resolution the compiler will also gather what the standard refers to as “synthesized” candidates, or a rewritten expression with the order of the parameters reversed. In the example above the compiler will try to use the rewritten expression (42 <=> a) < 0 but it will find that there is no conversion from IntWrapper to int to satisfy the left-hand side so that rewritten expression is dropped. The compiler also conjures up the “synthesized” expression 0 < (a <=> 42) and finds that there is a conversion from int to IntWrapper through its converting constructor so this candidate is used.

The goal of synthesized expressions is to avoid the mess of needing to write boilerplate friend functions to fill in gaps where your object could be converted from other types. Synthesized expressions are generalized to 0 @ (b <=> a).

More Complex Types

The compiler-generated spaceship operator doesn’t stop at single members of classes, it will generate a correct set of comparisons for all of the sub-objects within your types:

struct Basics {
  int i;
  char c;
  float f;
  double d;
  auto operator<=>(const Basics&) const = default;
};

struct Arrays {
  int ai[1];
  char ac[2];
  float af[3];
  double ad[2][2];
  auto operator<=>(const Arrays&) const = default;
};

struct Bases : Basics, Arrays {
  auto operator<=>(const Bases&) const = default;
};

int main() {
  constexpr Bases a = { { 0, 'c', 1.f, 1. },
                        { { 1 }, { 'a', 'b' }, { 1.f, 2.f, 3.f }, { { 1., 2. }, { 3., 4. } } } };
  constexpr Bases b = { { 0, 'c', 1.f, 1. },
                        { { 1 }, { 'a', 'b' }, { 1.f, 2.f, 3.f }, { { 1., 2. }, { 3., 4. } } } };
  static_assert(a == b);
  static_assert(!(a != b));
  static_assert(!(a < b));
  static_assert(a <= b);
  static_assert(!(a > b));
  static_assert(a >= b);
}

The compiler knows how to expand members of classes that are arrays into their lists of sub-objects and compare them recursively. Of course, if you wanted to write the bodies of these functions yourself you still get the benefit of the compiler rewriting expressions for you.

Looks Like a Duck, Swims Like a Duck, and Quacks Like operator==

Some very smart people on the standardization committee noticed that the spaceship operator will always perform a lexicographic comparison of elements no matter what. Unconditionally performing lexicographic comparisons can lead to inefficient generated code with the equality operator in particular.

The canonical example is comparing two strings. If you have the string "foobar" and you compare it to the string "foo" using ==, one would expect that operation to be nearly constant time. The efficient string comparison algorithm is thus:

  • First compare the size of the two strings, if the sizes differ return false, otherwise
  • step through each element of the two strings in unison and compare until one differs or the end is reached, return the result.

Under spaceship operator rules we need to start with the deep comparison on each element first until we find one that is different. In our example of "foobar" and "foo", only when comparing 'b' against the end of "foo" do you finally return false.

To combat this there was a paper, P1185R2 which details a way for the compiler to rewrite and generate operator== independently of the spaceship operator. Our IntWrapper could be written as follows:

#include <compare>
struct IntWrapper {
  int value;
  constexpr IntWrapper(int value): value{value} { }
  auto operator<=>(const IntWrapper&) const = default;
  bool operator==(const IntWrapper&) const = default;
};

That looks like one more step, but there’s good news: you don’t actually need to write the code above, because simply writing auto operator<=>(const IntWrapper&) const = default is enough for the compiler to implicitly generate the separate, and more efficient, operator== for you!

The compiler applies a slightly altered “rewrite” rule specific to == and != wherein these operators are rewritten in terms of operator== and not operator<=>. This means that != also benefits from the optimization.

Old Code Won’t Break

At this point you might be thinking: OK, if the compiler is allowed to perform this operator rewriting business, what happens when I try to outsmart the compiler:

struct IntWrapper {
  int value;
  constexpr IntWrapper(int value): value{value} { }
  auto operator<=>(const IntWrapper&) const = default;
  bool operator<(const IntWrapper& rhs) const { return value < rhs.value; }
};
constexpr bool is_lt(const IntWrapper& a, const IntWrapper& b) {
  return a < b;
}

The answer here is, you didn’t. The overload resolution model in C++ has this arena where all of the candidates do battle, and in this specific battle we have 3 candidates:

    • IntWrapper::operator<(const IntWrapper& a, const IntWrapper& b)
    • IntWrapper::operator<=>(const IntWrapper& a, const IntWrapper& b) (rewritten)
    • IntWrapper::operator<=>(const IntWrapper& b, const IntWrapper& a) (synthesized)

Under C++17’s overload resolution rules the result of that call would have been ambiguous, but the C++20 rules were changed to allow the compiler to resolve this situation to the most logical overload.

There is a phase of overload resolution where the compiler must perform a series of tiebreakers. In C++20, there is a new tiebreaker that states we must prefer overloads that are not rewritten or synthesized; this makes our overload IntWrapper::operator< the best candidate and resolves the ambiguity. This same machinery prevents synthesized candidates from stomping on regular rewritten expressions.

Closing Thoughts

The spaceship operator is a welcome addition to C++; it is one of those features that simplifies your code and helps you write less of it, and sometimes less is more. So buckle up with C++20’s spaceship operator!

We urge you to go out and try the spaceship operator; it’s available right now in Visual Studio 2019 under /std:c++latest! As a note, the changes introduced through P1185R2 will be available in Visual Studio 2019 version 16.2. Please keep in mind that the spaceship operator is part of C++20 and is subject to change up until C++20 is finalized.

As always, we welcome your feedback. Feel free to send any comments through e-mail at visualcpp@microsoft.com, through Twitter @visualc, or Facebook at Microsoft Visual Cpp. Also, feel free to follow me on Twitter @starfreakclone.

If you encounter other problems with MSVC in VS 2019 please let us know via the Report a Problem option, either from the installer or the Visual Studio IDE itself. For suggestions or bug reports, let us know through DevComm.

The post Simplify Your Code With Rocket Science: C++20’s Spaceship Operator appeared first on C++ Team Blog.
