
MileIQ and Azure Event Hubs: Billions of miles streamed


This post was co-authored by Shubha Vijayasarathy, Program Manager, Azure Messaging (Event Hubs)

With billions of miles logged, MileIQ provides stress-free logging and accurate mileage reports for millions of drivers. Logging and reporting miles driven is a necessity for everyone from independent contractors to organizations with employees who drive for work. MileIQ automates mileage logging to create accurate records of miles driven, minimizing the time and effort of manual calculation. Real-time mileage tracking produces over a million location signal events per hour, requiring fast and resilient event processing that scales.

MileIQ leverages Apache Kafka to ingest massive streams of data:

  • Event processing: Events that demand time-consuming processing are put into Kafka, and multiple processors consume and process these asynchronously.
  • Communication among microservices: The microservice that owns an event publishes it to a Kafka topic; other microservices that are interested in the event subscribe to that topic to consume it.
  • Data analytics: Because all important events are published to Kafka, the data analytics team subscribes to the topics it is interested in and pulls the data it needs for processing.

Growth Challenges

As with any successful venture, growth introduces operational challenges as infrastructure struggles to support the growing demand. In MileIQ’s case, the effort and resources required to maintain Apache Kafka clusters multiplied quickly with adoption. A seemingly simple task, like modifying a topic’s retention configuration, becomes an operational burden as the number of Kafka clusters scales to meet the increase in data.

Leveraging a managed service enabled MileIQ to shift resources from operations and maintenance to new ways of driving business impact. Here are a couple of reasons why the MileIQ team selected Azure Event Hubs for Kafka:

  • Fully managed platform as a service (PaaS): With little configuration or management overhead, Event Hubs for Kafka provides a PaaS Kafka experience without the need to manage, configure, or run Kafka clusters.
  • Supports multiple Kafka use cases: Event Hubs for Apache Kafka provides support at the protocol level, enabling integration of existing Kafka applications with no code changes and minimal configuration changes. MileIQ’s existing Kafka producers and consumers, as well as other streaming applications like Apache Kafka MirrorMaker and Apache Spark, integrated seamlessly with the Kafka-enabled event hub (see the connection sketch after this list).
  • Deliver streaming data to Azure Blob storage: The Capture feature of Event Hubs automatically sends data from Azure Event Hubs for Kafka to Blob storage. MileIQ uses the data in Blob storage for data analytics and backup.
  • Enterprise performance: The Dedicated-tier cluster offers single-tenant deployments with a guaranteed 99.99% SLA. MileIQ performance tests showed the Dedicated-tier cluster was able to consistently sustain a throughput of 6,000 events per second.*

* Testing was based on producing one event at a time synchronously, to address specific use cases that favor consistency over throughput. Testing with batching and asynchronous produces resulted in much higher throughput.
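To illustrate the protocol-level compatibility mentioned above, a Kafka client typically only needs its connection settings changed to point at the Event Hubs Kafka endpoint (port 9093, SASL PLAIN with the namespace connection string). Below is a minimal, hypothetical TypeScript sketch using the kafkajs library; the namespace, topic name, and payload are placeholders rather than MileIQ's actual configuration:

import { Kafka } from "kafkajs";

// Event Hubs for Kafka authenticates with SASL PLAIN: the username is the
// literal string "$ConnectionString" and the password is the namespace
// connection string.
const kafka = new Kafka({
  clientId: "mileage-producer",                                   // hypothetical client id
  brokers: ["<your-namespace>.servicebus.windows.net:9093"],      // Kafka endpoint of the namespace
  ssl: true,
  sasl: {
    mechanism: "plain",
    username: "$ConnectionString",
    password: process.env.EVENTHUBS_CONNECTION_STRING ?? "",      // "Endpoint=sb://..."
  },
});

async function produceLocationSignal(): Promise<void> {
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: "location-signals",                                    // hypothetical topic / event hub name
    messages: [{ value: JSON.stringify({ lat: 47.6, lon: -122.3, ts: Date.now() }) }],
  });
  await producer.disconnect();
}

produceLocationSignal().catch(console.error);

Consumers work the same way: only the bootstrap server and SASL settings change, which is why existing Kafka applications can be pointed at Event Hubs without code changes.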

Set up for success

As a result of migrating its Apache Kafka workloads to a managed service, MileIQ now has the infrastructure needed to support future growth.

“To sum up, our experience switching over to Azure Event Hubs Kafka has been excellent. To start with, the onboarding was straightforward, integration was seamless, and we continue to receive great help and support from the Azure Event Hubs Kafka team. In the near future, we look forward to the release of new features that the Azure Event Hubs Kafka team is working on – Geo Replication, Idempotent Producers, Kafka Streams, etc.”

“Migrating to Azure Event Hubs Kafka was a painless experience. Straightforward onboarding, seamless integration, and support from the Event Hubs team every step of the way. We’re excited to see what’s next and look forward to a continued partnership.” – MileIQ

Start streaming data

Data is valuable only when there is an easy way to process it and get timely insights from data sources. Azure Event Hubs provides a fully managed distributed stream processing platform with low latency and seamless integration with Apache Kafka applications.

What are you waiting for? Time to get event-ing!

Enjoyed this blog? Follow us for updates as we expand the list of supported features. Leave us your feedback, questions, or comments below.

Happy event-ing!


How HSBC built its PayMe for Business app on Microsoft Azure


Bank-grade security, super-fast transactions, and analytics 

If you live in Asia or have ever traveled there, you’ve probably witnessed the dramatic impact that mobile technology has had on all aspects of day-to-day life. In Hong Kong in particular, most consumers now use a smartphone daily, presenting new opportunities for organizations to deliver content and services directly to their mobile devices.

As one of the world’s largest international banks, HSBC is building new services on the cloud that enable it to organize its data more efficiently, analyze that data to understand customers better, and make more core customer journeys and features available mobile-first.

HSBC’s retail and business banking teams in Hong Kong have combined the convenience afforded by smartphones with cloud services to allow “cashless” transactions, where people can use their smartphone to make payments digitally. Today, over one and a half million people use HSBC’s PayMe app to exchange money with people in their personal network for free. And businesses are using HSBC’s new PayMe for Business app, built natively on Azure, to collect payments instantly, with 98 percent of all transactions completed in 500 milliseconds or less. Additionally, businesses can leverage powerful intelligence built into the app to improve their sales and operations.

On today’s Microsoft Mechanics episode of “How We Built it,” Alessio Basso, Chief Architect of PayMe from HSBC, explains the approach they took and why.

Microsoft Mechanics episode - HSBC's PayMe for Business app

Bank-grade security, faster time to delivery, dynamic scale and resiliency

The first decision Alessio and team made was to use fully managed services to allow them to go from ideation to a fully operational service in just a few months. Critical to their approach was adopting a microservices-based architecture with Azure Kubernetes Service and Azure Database for MySQL.

They designed each microservice to be independent, with its own instances of Azure managed services, including Azure Database for MySQL, Azure Event Hubs, Azure Storage, Azure Key Vault for credentials and secrets management, and more. They architected for this level of isolation to strengthen security and overall application uptime, as shared dependencies are eliminated.

A diagram showing the microservices architecture.

Each microservice can rapidly scale compute and database resources elastically and independently, based on demand. What’s more, Azure Database for MySQL allows for the creation of read replicas to offload read-only and analytical queries without impacting payment transaction response times.

A diagram showing read replicas in Azure Database for MySQL.
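To make the read-offloading pattern concrete, here is a minimal, hypothetical TypeScript sketch using the mysql2 driver: writes go to the primary server while analytical queries go to a read replica. The server names, credentials, schema, and queries are invented for illustration and are not HSBC's actual implementation:

import { createPool } from "mysql2/promise";

// The primary server handles payment writes; the read replica serves reporting queries.
const primary = createPool({
  host: "payme-primary.mysql.database.azure.com",    // hypothetical primary server
  user: "app_user@payme-primary",                    // Azure MySQL single-server user@server format
  password: process.env.MYSQL_PASSWORD,
  database: "payments",
  ssl: { rejectUnauthorized: true },                 // Azure Database for MySQL requires TLS
});

const replica = createPool({
  host: "payme-replica.mysql.database.azure.com",    // hypothetical read replica
  user: "app_user@payme-replica",
  password: process.env.MYSQL_PASSWORD,
  database: "payments",
  ssl: { rejectUnauthorized: true },
});

export async function recordPayment(payerId: string, payeeId: string, amount: number) {
  await primary.execute(
    "INSERT INTO payments (payer_id, payee_id, amount) VALUES (?, ?, ?)",
    [payerId, payeeId, amount]
  );
}

export async function dailyVolumeReport() {
  // The analytical query is routed to the replica so it never competes with payment traffic.
  const [rows] = await replica.query(
    "SELECT DATE(created_at) AS day, SUM(amount) AS total FROM payments GROUP BY DATE(created_at)"
  );
  return rows;
}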

Also, from a security perspective, because each microservice runs within its own subnet inside an Azure Virtual Network, the team is able to isolate network communication between Azure resources using service principals and Virtual Network service endpoints.

Fast and responsive analytics platform

At its core, HSBC’s PayMe is a social app that allows consumers to establish their personal networks while facilitating interactions and transactions with the people and businesses in their circle. To create more value for both businesses and consumers, Azure Cosmos DB is used to store graph data that models customer-merchant-transaction relationships.

Massive amounts of structured and unstructured data from Azure Database for MySQL, Event Hubs, and Storage are streamed and transformed. The team designed an internal data ingestion process that feeds an analytical model called S.L.I.M. (simple, lightly, integrated model), optimized for analytics query performance, and that makes data virtually available to the analytics platform using Azure Databricks Delta’s unmanaged table capability.

A diagram showing how data is virtualized for the analytics platform.

Machine learning within their analytics platform, built on Azure Databricks, then allows them to quickly determine patterns and relationships and to detect anomalous activity.

With Azure, organizations can immediately take advantage of new opportunities to deliver content and services directly to mobile devices, including a next-level digital payment platform.

  • To learn more about how HSBC architected their cashless digital transaction platform, please watch the full episode.
  • Learn more about achieving microservice independence with your own instance of an Azure managed service like Azure Database for MySQL.

What’s new in Azure DevOps Sprint 154


Sprint 154 has just finished rolling out to all organisations and you can check out all the cool features in the release notes. Here are just some of the features that you can start using today.

Add a GitHub release as an artifact source

You can link your GitHub releases as an artifact source in Azure DevOps release pipelines. This lets you consume a GitHub release as part of your deployments.

Create and embed work items from a wiki page

We heard from your feedback that the wiki is used to capture brainstorming documents, planning documents, feature ideas, spec documents, meeting minutes, and more. Now you can create features and user stories directly from a planning document without leaving the wiki page, saving you time and reducing context switching.

Instant search for work items

Ever lose the link to a work item you recently opened and not know where to go to find it? Never fear – the search bar now provides a list of recently accessed work items and full support for search syntax so you can find any work item right inline.

Tip: You can invoke the search box by typing the keyboard shortcut “/”.

Live updates for work items

We’ve made it even easier to stay up to date on the latest changes with the introduction of live updates. Now, just like on the Kanban board, if you have a work item open and someone on your team makes a change, you will get those updates immediately without needing to refresh.

View parent work item as a column

Another longstanding paper cut in the product is the inability to view parent information as you reorder the backlog, so we’re excited to introduce parent work item as a configurable column. You can now change the order of your PBIs with the Feature information in view!

These features are just the tip of the iceberg; there are plenty more that we’ve released in Sprint 154. Check out the full list of features for this sprint in the release notes.

The post What’s new in Azure DevOps Sprint 154 appeared first on Azure DevOps Blog.

Announcing TypeScript 3.6 Beta


Today we’re happy to announce the availability of TypeScript 3.6 Beta. This beta is intended to be a feature-complete version of TypeScript 3.6. In the coming weeks we’ll be working on bugs and improving performance and stability for our release candidate, and eventually the full release.

To get started using the beta, you can get it through NuGet, or use npm with the following command:

npm install -g typescript@beta

You can also get editor support for the beta in your editor of choice.

Let’s explore what’s coming in 3.6!

Stricter Generators

TypeScript 3.6 introduces stricter checking for iterators and generator functions. In earlier versions, users of generators had no way to differentiate whether a value was yielded or returned from a generator.

function* foo() {
    if (Math.random() < 0.5) yield 100;
    return "Finished!"
}

let iter = foo();
let curr = iter.next();
if (curr.done) {
    // TypeScript 3.5 and prior thought this was a 'string | number'.
    // It should know it's 'string' since 'done' was 'true'!
    curr.value
}

Additionally, generators just assumed the type of yield was always any.

function* bar() {
    let x: { hello(): void } = yield;
    x.hello();
}

let iter = bar();
iter.next();
iter.next(123); // oops! runtime error!

In TypeScript 3.6, the checker now knows that the correct type for curr.value should be string in our first example, and will correctly error on our call to next() in our last example. This is thanks to some changes in the Iterator and IteratorResult type declarations to include a few new type parameters, and to a new type that TypeScript uses to represent generators, called the Generator type.

The Iterator type now allows users to specify the yielded type, the returned type, and the type that next can accept.

interface Iterator<T, TReturn = any, TNext = undefined> {
    // Takes either 0 or 1 arguments - doesn't accept 'undefined'
    next(...args: [] | [TNext]): IteratorResult<T, TReturn>;
    return?(value?: TReturn): IteratorResult<T, TReturn>;
    throw?(e?: any): IteratorResult<T, TReturn>;
}

Building on that work, the new Generator type is an Iterator that always has both the return and throw methods present, and is also iterable.

interface Generator<T = unknown, TReturn = any, TNext = unknown>
        extends Iterator<T, TReturn, TNext> {
    next(...args: [] | [TNext]): IteratorResult<T, TReturn>;
    return(value: TReturn): IteratorResult<T, TReturn>;
    throw(e: any): IteratorResult<T, TReturn>;
    [Symbol.iterator](): Generator<T, TReturn, TNext>;
}

To allow differentiation between returned values and yielded values, TypeScript 3.6 converts the IteratorResult type to a discriminated union type:

type IteratorResult<T, TReturn = any> = IteratorYieldResult<T> | IteratorReturnResult<TReturn>;

interface IteratorYieldResult<TYield> {
    done?: false;
    value: TYield;
}

interface IteratorReturnResult<TReturn> {
    done: true;
    value: TReturn;
}

In short, what this means is that you’ll be able to appropriately narrow down values from iterators when dealing with them directly.

To correctly represent the types that can be passed in to a generator from calls to next(), TypeScript 3.6 also infers certain uses of yield within the body of a generator function.

function* foo() {
    let x: string = yield;
    console.log(x.toUpperCase());
}

let x = foo();
x.next(); // first call to 'next' is always ignored
x.next(42); // error! 'number' is not assignable to 'string'

If you’d prefer to be explicit, you can also enforce the type of values that can be returned, yielded, and evaluated from yield expressions using an explicit return type. Below, next() can only be called with booleans, and depending on the value of done, value is either a string or a number.

/**
 * - yields numbers
 * - returns strings
 * - can be passed in booleans
 */
function* counter(): Generator<number, string, boolean> {
    let i = 0;
    while (true) {
        if (yield i++) {
            break;
        }
    }
    return "done!";
}

var iter = counter();
var curr = iter.next()
while (!curr.done) {
    console.log(curr.value);
    curr = iter.next(curr.value === 5)
}
console.log(curr.value.toUpperCase());

// prints:
//
// 0
// 1
// 2
// 3
// 4
// 5
// DONE!

For more details on the change, see the pull request here.

More Accurate Array Spread

In pre-ES2015 targets, the most faithful emit for constructs like for/of loops and array spreads can be a bit heavy. For this reason, TypeScript uses a simpler emit by default that only supports array types, and supports iterating on other types using the --downlevelIteration flag. Under this flag, the emitted code is more accurate, but is much larger.

--downlevelIteration being off by default works well since, by-and-large, most users targeting ES5 only plan to use iterative constructs with arrays. However, our emit that only supported arrays still had some observable differences in some edge cases.

For example, the following expression

[...Array(5)]

is equivalent to the following array.

[undefined, undefined, undefined, undefined, undefined]

However, TypeScript would instead transform the original code into this code:

Array(5).slice();

This is slightly different. Array(5) produces an array with a length of 5, but with no defined property slots!

1 in [undefined, undefined, undefined] // true
1 in Array(3) // false

And when TypeScript calls slice(), it also creates an array with indices that haven’t been set.
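Here's a minimal sketch (plain ECMAScript 2015+ semantics, runnable as-is) showing why the two forms are observably different: a spread copies each element, so every slot is defined, while slice() preserves the holes, which methods like map() then skip.

// Spreading uses the iterator protocol, so holes become real 'undefined' slots.
const spread = [...Array(3)];
console.log(1 in spread);            // true
console.log(spread.map(() => "x"));  // ["x", "x", "x"]

// slice() preserves holes, and map() skips unset indices entirely.
const sliced = Array(3).slice();
console.log(1 in sliced);            // false
console.log(sliced.map(() => "x"));  // [ <3 empty items> ]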

This might seem like a bit of an esoteric difference, but it turns out many users were running into this undesirable behavior. Instead of using slice() and built-ins, TypeScript 3.6 introduces a new __spreadArrays helper to accurately model what happens in ECMAScript 2015 in older targets outside of --downlevelIteration. __spreadArrays is also available in tslib (which is worth checking out if you’re looking for smaller bundle sizes).

For more information, see the relevant pull request.

Improved UX Around Promises

Promises are one of the most common ways to work with asynchronous data nowadays. Unfortunately, using a Promise-oriented API can often be confusing for users. TypeScript 3.6 introduces some improvements for when Promises are mis-handled.

For example, it’s often very common to forget to .then() or await the contents of a Promise before passing it to another function. TypeScript’s error messages are now specialized, and inform the user that perhaps they should consider using the await keyword.

interface User {
    name: string;
    age: number;
    location: string;
}

declare function getUserData(): Promise<User>;
declare function displayUser(user: User): void;

async function f() {
    displayUser(getUserData());
//              ~~~~~~~~~~~~~
// Argument of type 'Promise<User>' is not assignable to parameter of type 'User'.
//   ...
// Did you forget to use 'await'?
}
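The fix, of course, is to await the Promise before passing its result along; a corrected version of the example above:

async function f() {
    displayUser(await getUserData()); // awaited, so the argument is a 'User', not a 'Promise<User>'
}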

It’s also common to try to access a method before await-ing or .then()-ing a Promise. This is another example, among many others, where we’re able to do better.

async function getCuteAnimals() {
    fetch("https://reddit.com/r/aww.json")
        .json()
    //   ~~~~
    // Property 'json' does not exist on type 'Promise<Response>'.
    //
    // Did you forget to use 'await'?
}

The intent is that even if a user is not aware of await, at the very least, these messages provide some more context on where to go from here.

In the same vein of discoverability and making your life easier, apart from better error messages on Promises, we now also provide quick fixes in some cases.

Quick fixes being applied to add missing 'await' keywords.

For more details, see the originating issue, as well as the pull requests that link back to it.

Semicolon-Aware Code Edits

Editors like Visual Studio and Visual Studio Code can automatically apply quick fixes, refactorings, and other transformations like automatically importing values from other modules. These transformations are powered by TypeScript, and older versions of TypeScript unconditionally added semicolons to the end of every statement; unfortunately, this disagreed with many users’ style guidelines, and many users were displeased with the editor inserting semicolons.

TypeScript is now smart enough to detect whether your file uses semicolons when applying these sorts of edits. If your file generally lacks semicolons, TypeScript won’t add one.

For more details, see the corresponding pull request.

Breaking Changes

String-Named Methods Named "constructor" Are Constructors

As per the ECMAScript specification, class declarations with methods named constructor are now constructor functions, regardless of whether they are declared using identifier names, or string names.

class C {
    "constructor"() {
        console.log("I am the constructor now.");
    }
}

A notable exception, and the workaround to this break, is using a computed property whose name evaluates to "constructor".

class D {
    ["constructor"]() {
        console.log("I'm not a constructor - just a plain method!");
    }
}

DOM Updates

Many declarations have been removed or changed within lib.dom.d.ts. This includes (but isn’t limited to) the following:

  • GlobalFetch is gone. Instead, use WindowOrWorkerGlobalScope.
  • Certain non-standard properties on Navigator are gone.
  • The experimental-webgl context is gone. Instead, use webgl or webgl2.

If you believe a change has been made in error, please file an issue!

JSDoc Comments Don’t Merge

In JavaScript files, TypeScript will only consult immediately preceding JSDoc comments to figure out declared types.

/**
 * @param {string} arg
 */
/**
 * oh, hi, were you trying to type something?
 */
function whoWritesFunctionsLikeThis(arg) {
    // 'arg' has type 'any'
}
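Since only the immediately preceding comment is consulted, the fix is to keep the type annotations in a single JSDoc block directly above the declaration:

/**
 * oh, hi, were you trying to type something?
 * @param {string} arg
 */
function whoWritesFunctionsLikeThis(arg) {
    // 'arg' now has type 'string'
}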

What’s Next?

TypeScript 3.6 is slated for the end of August, with a Release Candidate a few weeks prior. We hope you give the beta a shot and let us know how things work. If you have any suggestions or run into any problems, don’t be afraid to drop by the issue tracker and open up an issue!

Happy Hacking!

– Daniel Rosenwasser and the TypeScript Team

The post Announcing TypeScript 3.6 Beta appeared first on TypeScript.

Watch keynote presentations from the useR!2019 conference


The keynote presentations from last week's useR!2019 conference in Toulouse are now available for everyone to view on YouTube. (The regular talks were also recorded and video should follow soon, and slides for most talks are available for download now at the conference website.) Here are links to the videos, indexed to the start of each presentation:

All of the presentations are excellent, but if I had to choose one to watch first, it would be Julia Stewart Lowndes' presentation, which is an inspiring example of how R has enabled marine researchers to collaborate and learn from data (like a transponder-equipped squid!). 

The videos have been made available thanks to sponsorship by the R Consortium. If you're not familiar with the R Consortium, you can learn more in the short presentation below from Joe Rickert:

The R Consortium is funded by its member organizations, so if you'd like to see more of the above, consider asking your company to become a member.

YouTube (R Consortium): 2019 Keynotes

Vcpkg: 2019.06 Update


The 2019.06 update of vcpkg, a tool that helps you manage C and C++ libraries on Windows, Linux, and macOS, is now available. This is the first time we’ve created a vcpkg release on our GitHub repository. This update summarizes the new functionality and improvements made to vcpkg over about a month’s time; the 2019.06 update covers the month of June.

This release includes many new ports and triplet updates including overlay options, improvements for port contributors, and new documentation. For a full list of this release’s improvements, check out our release notes on GitHub.

Ports

There has been substantial growth in vcpkg contributions over the past few months, with over 1,000 packages now available in the catalog. You can view the libraries available by either searching for a library name in the GitHub repo ports folder or using the vcpkg search command.

We added 44 new ports in the month of June. Some notable additions include: bdwgc, cJSON, greatest, immer, json-c, and zydis. These ports have 1K+ stars on their respective GitHub repos. You can view a full list of new ports in the new ports section of our release notes.

In addition to new ports, we updated 291 existing ports. A notable update in this release is port ‘Homepages’.

Port Homepages

As part of our infrastructure work, you can now view the ‘Homepage’ for a port. This allows you to easily view a port’s official homepage via a link to the website. Let’s take the Abseil port for example. If you navigate to <vcpkg root>/ports/abseil/CONTROL, you will find the line “Homepage: https://github.com/abseil/abseil-cpp” which links to the official Abseil page.

Overlay Ports

The vcpkg command line interface allows you to easily search, install, and maintain your libraries. We added an --overlay-ports option to allow you to override ports with alternate versions and create private ports.

Let’s look at an example where you are using OpenCV for your computer vision project. You would like to use vcpkg to acquire OpenCV and other packages. Your team is specifically using version 3.0 of OpenCV, but vcpkg offers version 3.4.3. Even though that version of OpenCV is not available in vcpkg, you can create a private port.

Let’s say you go ahead and create a private GitHub repo and check in the ports you want to preserve including OpenCV 3.0 and its specific dependent libraries that also may not be available in current vcpkg. You can then provide your team with the link to clone your private repo.

Locally, you create a custom ports directory and commit your changes:

~/vcpkg$ mkdir vcpkg-custom-ports
~/vcpkg$ cd vcpkg-custom-ports
~/vcpkg/vcpkg-custom-ports$ git init
~/vcpkg/vcpkg-custom-ports$ cp -r $VCPKG_ROOT/ports/opencv .
~/vcpkg/vcpkg-custom-ports$ git add .
~/vcpkg/vcpkg-custom-ports$ git commit -m "[opencv] Add OpenCV 3.0 port"
~/vcpkg/vcpkg-custom-ports$ git remote add origin https://github.com/<My GitHub username>/vcpkg-custom-ports.git
~/vcpkg/vcpkg-custom-ports$ git push -u origin master

Now, you and your team can use version 3.0 of OpenCV for your projects with vcpkg using the following:

~/vcpkg$ git clone https://github.com/<My GitHub username>/vcpkg-custom-ports.git
~/vcpkg$ ./vcpkg update --overlay-ports=./vcpkg-custom-ports
~/vcpkg$ ./vcpkg upgrade --no-dry-run --overlay-ports=./vcpkg-custom-ports

Note that you may need to update vcpkg to use the most up-to-date command line options. You can update vcpkg on Windows via bootstrap-vcpkg.bat or on macOS/Linux via ./bootstrap-vcpkg.sh.

This allows you to upgrade your packages and preserve the older version of OpenCV that your project requires.

As shown in the example above, you can use --overlay-ports with the vcpkg install, vcpkg update, vcpkg upgrade, vcpkg export, and vcpkg depend-info commands. Learn more in our overlay-ports documentation.

Note that while overlay ports can help with overriding port versions and creating private ports, this is part of our ongoing work to improve the usability of vcpkg when it comes to versioning. Stay tuned for a future post on best practices for versioning with vcpkg!

Triplets

Vcpkg provides many triplets (target environments) by default. This past month, we focused on increasing the number of ports available on Linux and creating port improvements for Linux and the Windows Subsystem for Linux (WSL). We now have 755 ports available for Linux and we updated over 150 ports for Linux and WSL.

Here is a current list of ports per triplet:

x64-osx: 823
x64-linux: 755
x64-windows: 1006
x86-windows: 977
x64-windows-static: 895
arm64-windows: 654
x64-uwp: 532
arm-uwp: 504

Don’t see a triplet you’d like? You can easily add your own triplets. Details on adding triplets can be found in our documentation.

Overlay Triplets

As part of our vcpkg command line updates, we also added an --overlay-triplets option. This option is especially helpful if you have custom triplet needs. You can use the option, similar to --overlay-ports, to override triplets with custom specifications and create custom triplets.

For example, a subset of Linux users require fully dynamic libraries, whereas the x64-linux triplet only builds static libraries. A custom triplet file based on the x64-linux triplet can be created to build dynamic libraries. To solve this problem:

First, create a folder to contain your custom triplets:

~/vcpkg$ mkdir ../custom-triplets

Then, create the custom triplet file:

~/vcpkg$ cp ./triplets/x64-linux.cmake ../custom-triplets/x64-linux-dynamic.cmake

And modify the custom-triplets/x64-linux-dynamic.cmake file to:

set(VCPKG_TARGET_ARCHITECTURE x64) 
set(VCPKG_CRT_LINKAGE dynamic) 
set(VCPKG_LIBRARY_LINKAGE dynamic) 
set(VCPKG_CMAKE_SYSTEM_NAME Linux)

* Note the change of VCPKG_LIBRARY_LINKAGE from static to dynamic.

Finally, use your custom triplet by passing the --overlay-triplets option:

~/vcpkg$ ./vcpkg install opencv:x64-linux-dynamic --overlay-triplets=../custom-triplets

Improvements for Port Contributors

We also made improvements to the vcpkg infrastructure including a public CI system, check features, and a ‘Homepage’ field for ports.

CI System

We now have public CI tests through Azure DevOps pipelines which are run for all PRs to the vcpkg GitHub repo. The CI system allows contributors to get direct, automatic access to failure logs for PRs on Linux, Windows, and Mac within minutes. For example:

PR with passing and failing checks

GitHub badge with passing and failing checks

The checks will still include badges to indicate pass/fail as shown by the ‘x’ or ‘check mark’.

And if a check fails, you can now drill into the details:

PR check details in Azure DevOps

Going further into Azure DevOps, you can get more information in the Summary tab such as downloading a zip file of all the failure logs along with a quick description of relevant changes:

Failed check details in Summary page of Azure DevOps

We hope the new CI system will improve your experience submitting PRs to vcpkg!

Check Features

vcpkg_check_features is a new portfile function that checks whether one or more features are part of a package installation. In vcpkg, we use features to enable optional capabilities offered by libraries. A user can ask vcpkg to install a package with a particular feature enabled. For example:

~/vcpkg$ vcpkg install opencv[cuda]

The install command enables the optional CUDA support for OpenCV.

vcpkg_check_features simplifies the portfile creation process for vcpkg contributors by shortening the syntax needed in the CMake portfile script. Previously, you needed to specify which features are included in the port:

if(<feature> IN_LIST FEATURES)
    set(<var> ON)
else()
    set(<var> OFF)
endif()

Now, you can simply write the following:

vcpkg_check_features(<feature> <output_variable>)

Learn more about using vcpkg_check_features in your portfiles in the vcpkg_check_features documentation.

‘Homepage’ Field for Ports

We also added an optional ‘Homepage’ field to CONTROL files, which links to the port’s official website. The Homepage field is designed to help you more easily find the origin of the ports you are using.

Documentation

We also updated our documentation to reflect these new changes. Check out the new docs for more information on some of the updates outlined in this post, in addition to a couple of other areas.

Thank you

Thank you to everyone who has contributed to vcpkg! We now have 639 total contributors. This release, we’d like to thank the following 24 contributors who made code changes in June:

cenit, coryan, driver1998, eao197, evpobr, Farwaykorse, hkaiser, jasjuang, josuegomes, jumpinjackie, lebdron, MarkIanHolland, martinmoene, martin-s, mloskot, myd7349, Neumann-A, past-due, pravic, SuperWig, tarcila, TartanLlama, ThadHouse, UnaNancyOwen

Tell Us What You Think

Install vcpkg, give it a try, and let us know what you think. If you run into any issues, or have any suggestions, please report them on the Issues section of our GitHub repository.

We can be reached via the comments below or via email (vcpkg@microsoft.com). You can also find our team – and me – on Twitter @VisualC and @tara_msft.

The post Vcpkg: 2019.06 Update appeared first on C++ Team Blog.

Top Stories from the Microsoft DevOps Community – 2019.07.19


This week was packed with action at MSInspire and MSReady. It was remarkable to see the innovation in Satya Nadella’s corenote. We truly live in the future!

As a cherry on top, this community shared some amazing stories this week. I am so proud that our Azure DevOps Open Source program is enabling more and more teams around the world to provide value to the community!

Migrating the Test-Kitchen Project to Azure Pipelines
Testing has been a part of the developer lifecycle for decades, but until a few years ago no one realized that the same concepts could be applied in the infrastructure world as well. That is, until Test-Kitchen was born. Test-Kitchen is an Open Source project that allows you to verify that your VMs will be in the expected state after you provision your infrastructure as code using tools such as Chef, Puppet or Ansible. Last month, Steven Murawski helped the Test Kitchen community move the project to Azure Pipelines, which allowed it to take advantage of the free unlimited Build minutes for Open Source projects, run the Builds on Windows, Linux and macOS hosted agents, and even shorten the Build times. Thank you for trusting us with this important effort!

How Home Assistant is using Azure Pipelines to automate all the things.
It is stories like this that make me proud to be a part of this team. Paulus Schoutsen and his team at Home Assistant chose to leverage the Azure Pipelines Open Source program to automate the continuous integration and release creation process on self-hosted custom Linux agents, and worked with Microsoft to work out the kinks and increase the number of available free build agents. The article even has a link to the Azure DevOps dashboard where you can view the current Build status of Home Assistant components!

Building Azure DevOps Extension on Azure DevOps – Implementation
I recently had a number of people ask me about the process of creating custom Tasks for Azure Pipelines. With Azure DevOps, you can create and share custom tasks, dashboard widgets and work item customizations by publishing extensions to our Marketplace. You can also choose to make the extensions available only to your organization, or to the entire community. Justin Yoo comes out with a timely post describing the process of building a new Azure DevOps extension for Netlify. And, once you are done with the implementation part, you can follow Justin’s next guide for extension publisher registration.

Azure DevOps & Teams Integration = Perfect Match
Another topic that comes up a lot lately is the integration between Azure DevOps and Microsoft Teams. This post from Steve Buchanan will walk you through using Azure DevOps Dashboards and Kanban Boards via Teams, as well as setting up Teams channel notifications for Repos and Pipelines activity. Give it a try!

Introduction to Azure DevOps
And if you are new to Azure DevOps and all of the above articles got you excited to explore the tool, here is an introductory overview from Digital Varys on all the product components and the value they provide.

Thank you, everyone! The stories you share underscore to me that the Azure DevOps team is truly living the Microsoft mission – to empower every person and every organization on the planet to achieve more.

If you’ve written an article about Azure DevOps or find some great content about DevOps on Azure, please share it with the #AzureDevOps hashtag on Twitter!

The post Top Stories from the Microsoft DevOps Community – 2019.07.19 appeared first on Azure DevOps Blog.

Announcing the Azure Boards app for Slack


Slack is one of the most popular communication platforms used in organizations, and many developers rely on it to build software collaboratively. Very often, conversations in Slack contain ideas and insights, and can help identify product defects. The same conversations then can continue in Azure Boards where development teams actually plan and manage their work.

Today, we’re announcing a new Azure Boards app for Slack to make it easier for development teams to work across Azure Boards and Slack, maintaining the context of each conversation.

With this app, users can create work items using a slash command, or use message actions to convert conversations in the channel into work items. Users can also set up and manage subscriptions to get notifications in their channel whenever work items are created or updated. Additionally, previews for work item URLs enable users to initiate discussions around work.

Create work items from your Slack channel

Use message actions to create work items from conversations in the channel

Get notified when a work item is created

Monitor changes to work items

Use work item URLs to initiate discussions around work items

For more details about the app, please take a look at the documentation or go straight ahead and install the app.

We’re constantly at work to improve the app, and in the near future you’ll see new features coming along, including the ability to @mention users when a work item is assigned to them. Please give the app a try and send us your feedback using the /azboards feedback command in the app or on Developer Community.

The post Announcing the Azure Boards app for Slack appeared first on Azure DevOps Blog.


Digital transformation with legacy systems simplified


Intelligent insurance means improving operations, enabling revenue growth, and creating engaging experiences, which is the result of digital transformation. The cloud has arrived with an array of technical capabilities that can equip an existing business to move into the future. However, insurance carriers face a harder road to transform business processes and IT infrastructures. Traditional policy and claim management solutions lack both cloud-era agility and the modularity required to react quickly to market forces. And legacy systems cannot be decommissioned unless new systems are fully operational and tested, meaning some overlap between old and new.

The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how one Microsoft partner uses Azure to solve a unique problem.

The need for efficient automation

The prevailing approach to upgrading enterprise software is to engage in large-scale IT projects that may take years and significant cost to execute. Delaying may only increase the costs, especially with the burden of continuing (and increasing) compliance. But more importantly, delay results in a significant opportunity cost. Due to competition, insurers are under pressure to pursue lower costs overall, especially in claim handling. New insurance technology also forces the need for new distribution models and for automating internal workflows and supply chains.

A platform built for transformation

The name of Codafication’s solution is Unity (not to be confused with the Unity game engine platform). Codafication calls Unity an ecosystem Platform-as-a-Service (ePaaS). It enables insurance carriers to accelerate their digital transformation through secure, bi-directional data integration with core and legacy systems. At the same time, the platform enables Codafication’s subscribers to use new cloud-native apps and services. The increase in connectivity means customers, staff and supply chains can integrate more easily and with greater efficiencies.

Unity seeks to address the changing expectations of insured customers without disruption to core policy and claim management functions within the enterprise. Codafication stresses a modular approach to implementing Unity. Their website provides a catalog of components such as project management, supply chain and resource planning, and financial control (and more).

In this graphic, potential inputs for the system include a wide variety of processes, from legacy core systems (expected) to robotic processes (a surprise). The output is equally versatile—dashboards and portals along with data lake and IoT workflow apps.

A diagram of the Codafication Unity platform, showing its inputs and outputs.

Insurers can take an iterative and modular approach to solving high value challenges rapidly. Unity provides all the tools required to accelerate digital transformation. Other noteworthy features include:

  • Custom extensions: use any programming language supported by Docker, in combination with Unity SDKs, to build custom frontend and backend solutions.
  • Off-the-shelf apps: plug in applications and services (from Codafication) designed for the insurance industry.
  • Scalability: cloud-native technology, underpinned by Kubernetes, can be hosted in the cloud or in a multi-cloud scenario, with a mix of Docker, serverless and on-premises options.
  • GraphQL API: leverage the power of a graph database to unlock data silos and find relationships between data stores from legacy systems. Integrate with cloud vendors, AI services and best-in-breed services through a single, secure, scalable and dynamic API (see the sketch after this list).
  • Integrative technologies: create powerful custom IoT workflows with logic hooks, web hooks and real-time data subscriptions.
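As a purely hypothetical illustration of what querying such a unified API can look like, here is a TypeScript sketch that posts a GraphQL query over HTTP. The endpoint, authentication scheme, and schema fields below are invented for the example and are not Codafication's actual API:

// A single GraphQL query can traverse relationships that span several
// legacy systems, because the graph layer resolves them behind one API.
const query = `
  query ClaimOverview($claimId: ID!) {
    claim(id: $claimId) {          # e.g. sourced from a legacy claims system
      status
      policy { holderName }        # e.g. sourced from a policy admin system
      suppliers { name trade }     # e.g. sourced from a supply-chain module
    }
  }
`;

async function fetchClaimOverview(claimId: string) {
  const response = await fetch("https://unity.example.com/graphql", {   // hypothetical endpoint
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.UNITY_API_TOKEN}`,           // hypothetical auth
    },
    body: JSON.stringify({ query, variables: { claimId } }),
  });
  const { data } = await response.json();
  return data.claim;
}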

Benefits

  • Through Unity, organizations can interconnect everything and relate data on the fly. Developers can leverage legacy core systems, middleware, and robotics using a microservice architecture driven by a powerful service mesh and extensible framework.
  • Teams can leverage this infrastructure to deliver solutions into the platform, in parallel, and into the hands of their users. Insurance carriers will find new use cases (like data science and AI) and develop apps rapidly, delivering projects faster, at lower cost and with less risk.
  • Projects can be secured and reused across the infrastructure. This accelerates digital transformation projects without disrupting existing architecture and is the primary step to implementing modern cloud native technologies, such as AI and IoT.
  • The ‘modernize now, decommission later’ approach to core legacy systems lets an insurer compete and remain relevant against competitors while providing a longer runway for decommissioning aging legacy systems.

Azure services

Unity leverages the power of Microsoft Azure to provide secure private cloud capability across the globe, drawing on a range of Azure services.

Next steps

To learn more about other industry solutions, go to the Azure for insurance page.

To find out more about this solution, go to Unity Cloud and click Contact me.

What’s the difference between Azure Monitor and Azure Service Health?


It’s a question we often hear. After all, they’re similar and related services. Azure Monitor helps you understand how your applications are performing and proactively identifies issues affecting them and the resources they depend on. Azure Service Health helps you stay informed and take action when Azure service issues like outages and planned maintenance affect you. So what’s the difference?

Azure Monitor and Azure Service Health are complementary services that you will often use together when troubleshooting issues. Let’s go over a typical scenario. For example, let’s say your app is having a problem and experiencing downtime. Your users are complaining and reporting the issue. What’s wrong? You start troubleshooting.

Step 1: Assess the health of Azure with Azure Service Health

As you start troubleshooting, you first want to answer the question: is it me or is it Azure? To make sure Azure as a platform isn’t having any problems, you’ll want to check Azure Service Health. Better yet, you might already know about any issues affecting you if you have Azure Service Health alerts set up. More on this later.

You visit Azure Service Health in the Azure portal, where you check to see if there are any active issues, outages, planned maintenance events, or other health advisories affecting you.

An image showing Azure Service Health in the Azure portal.

At this stage, you might have been tempted to visit the Azure status page. Instead, we recommend you check Service Health, as we outlined above. Why? The status page only reports on major, widespread outages and doesn’t include any information about planned maintenance or other health advisories. To understand everything on the Azure side that might affect your availability, you need to visit Service Health.

So you’ve checked Service Health and determined there aren’t any known issues at the Azure level, which means the issue is likely on your side. What next?

Step 2: Review the health of your apps with Azure Monitor

You’ll want to dive into Azure Monitor to see if you can identify any issues on your end. Azure Monitor gives you a way to collect, analyze, and act on all the telemetry from your cloud and on-premises environments. These insights can help you maximize the availability and performance of your applications.

A graphic showing how Azure Monitor works.

Azure Monitor works by ingesting metrics and logs data from a wide variety of sources—application, OS, resources, and more—so you can visualize, analyze, and respond to what’s going on with your apps.

In our troubleshooting example, using Azure Monitor you might find there’s a lot of demand for your app in the early morning during peak hours, and you’re running into capacity issues with your infrastructure (such as VMs or containers). Now that you’ve determined the problem, you fix it by scaling up.

Well done, you’ve successfully used Service Health and Monitor to diagnose and solve the issue. But you’re not quite finished yet.

Step 3: Set up alerts for future events

To prevent this issue from happening again, you’ll want to use Monitor to set up log alerts and autoscaling to notify you and help you respond more quickly. At the same time, you should set up Service Health alerts so you’re aware of any Azure platform-level issues that might occur.

An image showing a Service Health alert being set up.

As you set up these alerts, you’ll find that one key similarity between Service Health and Azure Monitor is their alerting platform. They both use the same alert definition workflow and leverage the same action rules and groups. This means that you can set up an action group once and use it multiple times for different scenarios.

Learn more about Service Health alerts and recommended best practices in our blog “Three ways to get notified about Azure service issues.”

Recap: Is it Azure or is it me?

Azure Service Health and Azure Monitor answer different parts of the question “Is it Azure or is it me?” Service Health helps you assess the health of Azure, while Azure Monitor helps you determine if there are any issues on your end. Both services use the same alerting platform to keep you notified and informed of the availability and performance of your Azure workloads. Get started with Service Health and Azure Monitor today.

Azure solutions for financial services regulatory boundaries


Microsoft Azure is rapidly becoming the public cloud of choice for large financial services enterprises. Some of the biggest reasons Global Financial Services Institutions (GFSIs) are choosing Azure to augment or replace on-premises application environments are:

  • The high level of security that the Azure cloud provides.
  • The exceptional control enterprises can have over compliance and security within their subscriptions.
  • The many features that Azure has for data governance and protection.
  • The long list of Global Regulatory Standards that the Azure cloud is compliant with. Please see the Microsoft Trust Center for more information.

Requirements for globally regulated Azure solutions

Azure is built to allow enterprises to control the flow of data between regions, and to control who has access to and can manage that data. Before we begin talking about solutions, we need to define the requirements.

Examples of global regulation

Many governments and coalitions have developed laws and regulations for how data is stored, where it can be stored, and how it must be managed. Some examples of the more stringent and well-known of these scenarios are:

  • European Union (EU)

General Data Protection Regulation (GDPR) is a legal framework that sets guidelines for the collection and processing of personal information from individuals who live in the EU.

  • Germany

Federal Data Protection Act is a law that deals with the conditions for processing employee data, and restrictions on the rights enjoyed by data subjects.

Data Localization and Management Law is a law that states that data collected about German citizens must be properly protected and encrypted, stored only on physical devices within Germany’s political boundaries, as well as managed only by German citizens.

  • China

Cyber Security Law (CSL) is a set of laws concerned with data localization, infrastructure, and management.

  • Canada

The Canadian Personal Information Protection and Electronic Documents Act (PIPEDA) protects consumer data across Canada against misuse and disclosure.

Architecture and design requirements

Beyond the above-mentioned regulatory requirements, there exist technical requirements specific to these scenarios. Cloud application and infrastructure architects are presented with the opportunity to develop solutions that provide business function while not violating international laws and regulations. The following are some of the requirements that need to be considered.

Globalization

A globalized business model provides access to multiple financial markets on a continuous basis each day. These markets differ in operations, language, culture, and of course regulation. Despite these differences, the services placed in the cloud need to be architected to be consistent across these markets to ensure manageability and customer experience.

Services and data management

Germany and China are prime examples of countries that only allow their citizens to manage data and the infrastructure on which that data resides.

Data localization

Many countries require at least some of the data sovereign to their country to remain physically within their borders. Regulated data cannot be transferred out of the country and data that does not meet regulatory requirements cannot be transferred into the country.

Reliability

Due to many of the above requirements, it becomes slightly more complicated to design for high availability, data replication, and disaster recovery. For example, data must be replicated only to a location consistent with the country or region’s standards and laws. Likewise, if a DR scenario is triggered, it must be ensured that the applications running in the DR site are not crossing legal or regulatory boundaries to access information.

Authentication

Proper authentication to support role- and identity-based access controls must be in place to ensure that only intended and legally authorized individuals can access resources.

The Azure solution

A graphic showing Azure's solution to these global regulations.

Security components

Azure Active Directory (AAD)

Azure Active Directory (AAD) is the cloud-based version of Active Directory, so it takes advantage of the flexibility, scalability, and performance of the cloud while retaining the AD functionality that customers have grown used to. One of those functions is the ability to create sub-domains that can be managed independently and that contain only those identities relevant to that country or region. AAD also provides functionality to differentiate between business-to-business (B2B) and business-to-consumer (B2C) relationships. This differentiation can help distinguish customer access to their own data from management access.

Azure Sentinel

Azure Sentinel is a scalable, cloud-native, security information event management (SIEM), and security orchestration automated response (SOAR) solution. Azure Sentinel delivers intelligent security analytics and threat intelligence across the enterprise, providing a single solution for alert detection, threat visibility, proactive hunting, and threat response.

Azure Key Vault 

Azure Key Vault helps safeguard cryptographic keys and secrets that cloud applications and services use. Key Vault streamlines the key management process and enables you to maintain control of keys that access and encrypt your data. Developers can create keys for development and testing in minutes, and then migrate them to production keys. Security administrators can grant (and revoke) permission to keys, as needed.
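As an illustration, here is a minimal TypeScript sketch that reads a secret at runtime using the Azure SDK for JavaScript (@azure/identity and @azure/keyvault-secrets); the vault URL and secret name are placeholders:

import { DefaultAzureCredential } from "@azure/identity";
import { SecretClient } from "@azure/keyvault-secrets";

// The application never stores the secret itself; it authenticates with its
// Azure AD identity and reads the secret from Key Vault at startup.
const credential = new DefaultAzureCredential();
const client = new SecretClient("https://contoso-vault.vault.azure.net", credential); // hypothetical vault

async function getDatabasePassword(): Promise<string> {
  const secret = await client.getSecret("sql-connection-password");                   // hypothetical secret name
  return secret.value ?? "";
}

Because the credential is resolved from the environment (managed identity, service principal, or developer login), the same code runs unchanged across regions while access is still governed by each region's own role assignments.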

Role based access control

Access management for cloud resources is a critical function for any organization that is using the cloud. Role-based access control (RBAC) helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to. RBAC is an authorization system built on Azure Resource Manager that provides fine-grained access management of Azure resources.

Azure Security Center

Azure Security Center is a unified infrastructure security management system that strengthens the security posture of your datacenters. It also provides advanced threat protection across your hybrid workloads in the cloud, whether they're in Azure or not, as well as on premises.

Governance components

Azure Blueprints

Azure Blueprints helps you deploy and update cloud environments in a repeatable manner using composable artifacts such as Azure Resource Manager templates to provision resources, role-based access controls, and policies. Blueprints can be used to deploy certain policies or controls for a given location or geographic region. Sample blueprints can be found in our GitHub repository.

Azure Policy

Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources, so those resources stay compliant with your corporate standards and service level agreements. For example, a policy can be set to allow only certain roles to access a group of resources. Another example is setting a policy that only certain sized resources are allowed in a given resource group. If a new resource is added to the group, the policy automatically applies to that entity. Sample Azure Policy configurations can be found in our GitHub repository.

Azure Virtual Datacenter Program (VDC)

The Azure Virtual Datacenter Program (VDC) is a collection of methods and archetypes designed to help enterprises standardize deployments and controls across application and workload environments. VDC utilizes multiple other Azure products, including Azure Policy and Azure Blueprints. VDC samples can be found in our GitHub repository.

Infrastructure components

Azure Site Recovery (ASR)

Azure Site Recovery (ASR) provides data replication and disaster recovery services between Azure regions, or between on-premises environments and Azure. ASR can be easily configured to replicate and fail over between Azure regions within or outside a country or geographic region.

High availability

Virtual machine (Infrastructure-as-a-Service, or IaaS) high availability can be achieved in multiple ways within the Azure cloud. Azure provides two native methods of failover:

  • An Azure Availability Set (AS) is a group of virtual machines that are deployed across fault domains and update domains within the same Azure Datacenter. Availability sets make sure that your application is not affected by single points of failure, like the network switch or the power unit of a rack of servers. Azure Availability Sets provide a service level agreement (SLA) of 99.95%.
  • An Availability Zone (AZ) is like an availability set in that the virtual machines are deployed across fault and update domains. The difference is that AZs provide a higher level of availability (SLA of 99.99%) by spreading the VMs across multiple Azure datacenters within the same region.

For Platform-as-a-Service (PaaS) offerings, high availability is built into the services and need not be configured by the customer as it is for the IaaS services above.

Data at rest encryption

Data at rest encryption is a common security requirement. In Azure, organizations can encrypt data at rest without the risk or cost of a custom key management solution. Organizations have the option of letting Azure completely manage encryption at rest. Additionally, organizations have various options to closely manage encryption or encryption keys.

Conclusion

The above capabilities are available across Azure’s industry-leading regional coverage and extensive global network. Microsoft’s commitment to global regulatory compliance, data protection, data privacy, and security makes Azure uniquely positioned to support GFSIs as they migrate complex mission-critical workloads to the cloud.

For more information on Azure compliance, please visit the Microsoft Trust Center compliance overview page.

Easing compliance for UK public and health sectors with new Azure Blueprints


Earlier this month we released our latest Azure Blueprints for key compliance standards with the availability of the UK OFFICIAL blueprint for the Government-Cloud (G-Cloud) standard and the National Health Service (NHS) Information Governance standard of the United Kingdom. The new blueprints map a set of Azure policies to the appropriate UK OFFICIAL and UK NHS controls for any Azure-deployed architecture. This allows UK government agencies and partners, as well as UK health organizations, to more easily create Azure environments that might store and process UK OFFICIAL government data and health data.

Azure Blueprints is a service that enables customers to define a repeatable set of Azure resources that implement and adhere to standards, patterns, and requirements. Azure Blueprints help customers to set up governed Azure environments that can scale to support production implementations for large-scale migrations.

The National Health Service is the national health system for England, which holds the population's health data. NHS Digital published its guidance on the use of public cloud services for storing confidential patient data, which provides a single standard that governs the collection, storage, and processing of patient data. Adherence to the NHS guidance helps protect the integrity and confidentiality of patient data against unauthorized access, loss, damage, and destruction.

G-Cloud is a UK government initiative to enable the adoption of cloud services by the UK public sector. The G-Cloud standard requires the implementation of 14 Cloud Security Principles. Every year, Microsoft submits evidence to attest that its in-scope cloud services comply with these principles, giving potential G-Cloud customers an overview of its risk environment. 

The UK OFFICIAL blueprint includes mappings to 8 of the 14 Cloud Security Principles:

1.  Data in transit protection. Assigns Azure Policy definitions to audit insecure connections to storage accounts and Redis cache.

2.  Data at rest protection (asset protection and resilience). Assigns Azure Policy definitions that enforce specific cryptographic controls and audit the use of weak cryptographic settings. Also includes policies to restrict deployment of resources to UK locations.

5.  Operational security. Assigns Azure Policy definitions that monitor missing endpoint protection, missing system updates, various vulnerabilities, unrestricted storage accounts, and whitelisting activity.

9.  Secure user management and 10. Identity and authentication. Assigns several Azure Policy definitions to audit external accounts, accounts that do not have multi-factor authentication (MFA) enabled, virtual machines (VMs) without passwords, and other issues.

11. External interface protection. Assigns Azure Policy definitions that monitor unrestricted storage accounts. Also assigns a policy that enables adaptive application controls on VMs.

12.  Secure Service Administration. Assigns Azure Policy definitions related to privileged access rights for external accounts, Azure Active Directory authentication, MFA enablement, etc.

13.  Audit Information for Users. Assigns Azure Policy definitions that audit or enable various log settings on Azure resources.

Microsoft has prepared a guide to explain how Azure can help customers comply with all 14 Cloud Security Principles, including principles 3, 4, 6, 7, 8, and 14. It can be found in our document 14 Cloud Security Controls for UK Cloud Using Microsoft Azure.

Compliance with regulations and standards such as ISO 27001, SSAE 16, PCI DSS, and UK OFFICIAL is increasingly necessary for all types of organizations, making control mappings to compliance standards a natural application for Azure Blueprints. Azure customers, particularly those in regulated industries, have expressed strong interest in compliance blueprints to make it easier to meet their compliance obligations.

We are committed to helping our customers leverage Azure in a manner that helps improve security and compliance. We have now released Azure Blueprints for ISO 27001, PCI DSS, UK OFFICIAL, and UK NHS.  Over the next few months we will release new built-in blueprints for HITRUST, NIST SP 800-53, FedRAMP, and Center for Internet Security (CIS) Benchmark. If you would like to participate in any early previews please sign up with this form, or if you have a suggestion for a compliance blueprint please share it via the Azure Governance Feedback Forum.

Learn more about the UK OFFICIAL and UK NHS blueprints in our documentation Control mapping of the UK OFFICIAL and UK NHS blueprint samples.

Expanding the Azure Stack partner ecosystem


We continue to expand our ecosystem by partnering with independent software vendors (ISV) around the globe to deliver prepackaged software solutions to Azure Stack customers. As we are getting closer to our two-year anniversary, we are humbled by the trust and confidence bestowed by our partners in the Azure Stack platform. We would like to highlight some of the partnerships that we built during this journey.

Security

Thales now offers their CipherTrust Cloud Key Manager solution through the Azure Stack Marketplace. It works with the Azure and Azure Stack “Bring Your Own Key” (BYOK) APIs to give customers control over their encryption keys. CipherTrust Cloud Key Manager creates Azure-compatible keys from the Vormetric Data Security Manager that can offer up to FIPS 140-2 Level 3 protection. Customers can upload, manage, and revoke keys, as needed, to and from Azure Key Vaults running in Azure Stack or Azure, all from a single pane of glass.

Migration

Every organization has a unique journey to the cloud based on its history, business specifics, culture, and, maybe most importantly, its starting point. The journey to the cloud provides many options, features, and functionalities, as well as opportunities to improve existing governance and operations, implement new ones, and even redesign applications to take advantage of cloud architectures.

When starting this migration, you can draw on a number of Azure Stack ISV partner solutions that help you begin with what you already have and progress to modernizing your applications as well as your operations. These are described in the “Azure Stack at its core is an Infrastructure-as-a-Service (IaaS) platform” blog series.

Data protection and disaster recovery

Veeam Backup and Replication 9.5 is now available through the Azure Stack Marketplace, making it possible to protect both Windows and Linux-based workloads running in the cloud from one centrally managed console. Refer to this document to learn about all data protection and disaster recovery partner solutions that support the Azure Stack platform.

Networking

The VM-Series next-generation firewall from Palo Alto Networks allows customers to securely migrate their applications and data to Azure Stack, protecting them from known and unknown threats with application whitelisting and threat prevention policies. You can learn more about the VM-series next-generation firewall on Azure Stack.

Developer platform and tools

We continue to invest in open source technologies and Bitnami helps us make this possible with their extensive application catalog. Bitnami applications can be found on the Azure Stack Marketplace and can easily be launched directly on your Azure Stack platform. Learn more about Bitnami offerings.

With self-service simplicity, performance, and scale, the Iguazio Data Science Platform empowers developers to deploy AI apps faster on the edge. The Iguazio Data Science Platform will soon be available through the Azure Stack Marketplace.

IoT solutions

PTC's ThingWorx IIoT platform is designed for rapidly developing industrial IoT solutions, with the ability to scale securely from the cloud to the edge. ThingWorx runs on top of Microsoft Azure or Azure Stack and leverages Azure PaaS services to bring a best-in-class IIoT solution to the manufacturing environment. Deploying ThingWorx on Azure Stack enables you to bring your cloud-based Industry 4.0 solution to the factory floor. Experience on the show floor a demonstration of how the ThingWorx Connected Factory solution pulls data from real factory assets and makes insightful data available in prebuilt applications that can be customized and extended using ThingWorx Composer and Mashup Builder.

Intelligent Edge devices

With the private preview of IoT Hub on Azure Stack, we are very excited to see our customers and partners creating solutions that perform data collection and AI inferencing in the field. Intel and its partners have created hardware kits that support IoT Edge and seamlessly integrate with Azure Stack. A few examples of such kits are the IEI Tank and up2, which enable the creation of computer vision solutions and deep learning inference using CPU, GPU, or an optional VPU. These kits allow you to kick-start your targeted application development with a superior out-of-the-box experience that includes pre-loaded software like the Intel Distribution of OpenVINO™.

View all partner solutions available on Azure Stack Marketplace

Java on Visual Studio Code July Update


Welcome to the July update of Java on Visual Studio Code!

In this update, we’d like to share a couple of new refactoring features, semantic selection, as well as some other enhancements we delivered during the last few weeks.

Refactoring

Trigger rename after extract to variable/constant/method

After performing an extract-to-variable/constant/method refactoring, more often than not we want to give the result a meaningful name. With this feature, you won’t need to perform a separate rename action anymore; it’s all streamlined into a single refactoring step.

Convert a local variable to a field.

Extract to field is also a very popular refactoring. When selecting an expression, you can now use extract to field.

When selecting a variable declaration, it will convert the variable to a field.

Support for semantic selection

Smart Selection (a.k.a. Semantic Selection) is a new feature added by VS Code, and it now understands Java code as well. With it, you can expand or shrink the selection range based on the semantic information at the caret position in your code.

  • To expand the selection, use Shift + Alt + →  on Windows, and Ctrl + Shift + Command + → on Mac
  • To shrink the selection, use Shift + Alt + ← on Windows and Ctrl + Shift + Command + ← on Mac

Other enhancements

Maven
  • Maven projects use the latest Execution Environment when source/target is not yet supported.
  • For users who don’t have Maven installed locally, mvn cannot be found when creating a Maven project from archetypes. The Maven extension now embeds a global Maven wrapper, which serves as a fallback when neither mvn nor a project-level mvnw is found.
  • Support to select archetype version during Maven project creation.
  • Refresh explorer when config maven.pomfile.globPattern changes.
Gradle
  • Added additional Gradle preferences.
    • java.import.gradle.arguments: arguments to pass to Gradle
    • java.import.gradle.jvmArguments: JVM arguments to pass to Gradle
    • java.import.gradle.home: setting for GRADLE_HOME
Checkstyle
  • Support loading Checkstyle configuration via HTTP URL.

Sign up

If you’d like to follow the latest of Java on VS Code, please share your email with us using the form below. We will send out updates and tips every couple of weeks and invite you to test our unreleased features and provide feedback early on.

Try it out

Please don’t hesitate to give it a try! Your feedback and suggestions are very important to us and will help shape our product in the future.

The post Java on Visual Studio Code July Update appeared first on The Visual Studio Blog.

Bring your GitHub collaborators to Azure DevOps


At Microsoft Build last spring we announced the ability for developers to sign in to Azure DevOps (and other Microsoft online services) with their GitHub credentials. The goal was to make it easier for developers to use Azure Pipelines and other services inside Azure DevOps to build better applications, faster, and collaboratively.

Since then we’ve seen an overwhelmingly positive response to this capability. Thank you!

The GitHub community includes many developers and teams, and we want each team to be able to bring its GitHub members to Azure DevOps for the scenarios that Azure DevOps fulfills.

Looking up your GitHub collaborators

Today we’re announcing the next step in the journey of making Azure DevOps and GitHub work great together. If you are an admin, sign in to Azure DevOps with your GitHub identity, and you can now invite your GitHub team members. You can search for and invite them from the Project homepage:

Inviting GitHub users from Azure DevOps project homepage

All you need to do is search by username or display name and with a few keystrokes you’ll see the list of matches from the GitHub community.

Furthermore, you can also do this from the user hub (Organization Settings -> Users). Use the ‘+Add new users’ capability to invite your GitHub team members:

Invite GitHub users from the Users page in your Azure DevOps Organization Settings

Adding your GitHub collaborators

Once you have selected your collaborators from the GitHub community, add them to your organization. Finally, share your Azure DevOps organization URL (https://dev.azure.com/{OrganizationName}) with your invited friends for them to sign-in and gain access to the project.

Pre-requisite

Since this feature is designed to enable better collaboration among GitHub users on Azure DevOps, it lights up only if you are signed in to Azure DevOps with your GitHub identity.

Enabling this capability

Create a new Azure DevOps organization with your GitHub username and this new capability is turned on by default.

For existing organizations, the organization admin can turn on this capability in the ‘Policies’ setting (https://dev.azure.com/{OrganizationName}/_settings/policy) of their Azure DevOps organization:

Enable inviting GitHub users to Azure DevOps organization

Docs & FAQs

For more information on this topic, see our docs, and in case of questions, see our FAQs.

Get started

Start exploring Azure DevOps now. Go to the Azure DevOps home page and click “Start free with GitHub” to get started.

The post Bring your GitHub collaborators to Azure DevOps appeared first on Azure DevOps Blog.


IoT sensors and wearables revolutionize patient care


When was the last time you or a loved one went to the doctor or hospital? Things have changed dramatically over the last few years, with kiosks to register, portals to track your health history, and texts reminding you about upcoming appointments.

These changes have made a difference in how we interact with our healthcare providers. But there are more changes, not on the horizon, but here today. It is estimated that as many as 50 billion medical devices will connect to clinicians, health systems, patients, and to each other.

Cardiac patient monitoring improvements

Imagine that you or a family member have periodic symptoms of irregular heartbeats, an all too common medical disorder known as an arrhythmia. If persistent, an arrhythmia can cause blood to clot in the heart, significantly increasing the risk of a heart attack or stroke. If caught early, a clot or blockage can be contained or cleared away and a stent can be put in to keep blood flowing normally. In the US, more than 1.8 million stents are implanted annually, along with countless other preventative cardiac procedures to treat the 28.2 million US adults with diagnosed heart disease. Following any cardiac-related procedure, a patient is typically counseled about the importance of exercise and nutrition and then sent home. Post procedure, it’s common to be concerned about another cardiac-related event in the immediate days following discharge, but what if the patient could be proactive, as well as reactive, to cardiac disease?

Peerbridge Health, a New York-based remote patient monitoring company, has developed the Peerbridge Cor™ (Cor), an award-winning, multi-channel wearable electrocardiogram (ECG) designed to better assist physicians and their patients in detecting and treating irregular heart activity expeditiously. Prescribed by a cardiologist, the Cor is an elegant wearable worn 24 hours a day for up to 7 days, with the ability to record every single heartbeat. In the event of abnormal cardiac activity, the patient can transmit select ECG activity to the prescribing physician’s care team for analysis at the press of a button. This continuous recording with transmitted events provides an unparalleled “window” into the patient’s heart activity as they go about their daily activities. Finally, an ECG monitoring solution provides the critical data transmission patients expect with modern medicine.

Peerbridge Health was founded by Dr. Angelo Acquista, who was frustrated by all the wires in the hospital when he was caring for his father in the ECU in 2006. Instead of getting up and walking around, his father was covered in wires and sensors and unable to move, like most cardiac patients. Shortly after this experience, Dr. Acquista started Peerbridge Health, determined to change how this chronic disease is managed. Today, Peerbridge is a leading-edge manufacturer of the Peerbridge Cor (pictured above), the smallest and lightest FDA-cleared, multi-channel, wireless ECG.

The Peerbridge team selected Microsoft’s Azure IoT platform for their cardiac monitor because they “saw the Microsoft IoT platform as being a foundational ingredient to help them grow and scale.” Peerbridge CEO Adrian Gilmore states, “Azure IoT not only provides the secure data stream that is needed to monitor patients, it also offers cloud tools enabling us to present data in formats physicians expect, making the entire system a real revolution in cardiac care.” He continued, “Our engagement at the Microsoft AI and IoT Insider Lab was the perfect opportunity for us to sharpen our team’s digital strategy, ensuring we optimize the company’s cloud architecture, and take full advantage of the variety of data services Microsoft offers.”

Avoiding diabetic amputations

Another company, Seattle-based Sensoria Health, has taken on the problem of diabetes-related amputations. Why diabetes? Well, the statistics from the American Podiatric Medical Association are staggering:

  • More than 400 million people have diabetes worldwide
  • 32 million people in the US have diabetes, costing more than 327 billion dollars
  • In the world today, a lower limb is lost to diabetes every 20 seconds
  • Cost in the US is estimated to be about 20 billion dollars

The typical progression to the amputation of a toe, foot, or more always begins with a foot ulcer. The team at Sensoria asked themselves, “What can be done to expedite the healing of foot wounds to avoid amputations?” In response, Sensoria joined forces with Optima to develop the Motus Smart powered by Sensoria®. It combines Sensoria® Core technologies with the clinically-tested Optima Molliter Offloading System to take the pressure off the area of ulceration and improve blood circulation, a critical factor in improving the chance of healing.

Originally unveiled at the Consumer Electronics Show in January 2018, where it won an Innovation Honoree Award, the Motus Smart leverages Sensoria® Core to monitor activity and compliance, and is a clinically-proven, viable alternative to a total contact cast or non-removable cam boot. The Sensoria® sensors work with a real-time app and alert system and an Azure-based dashboard to inform patients, caregivers, and clinicians of non-compliant patients, allowing for easy and immediate intervention. The expensive and uncomfortable cast finally has a viable, clinically-proven IoT alternative in the Motus Smart.

Simple UX focused on patient adherence

Why did Sensoria choose Microsoft’s Azure IoT platform for their patient monitoring devices? Davide Vigano, co-founder and CEO of Sensoria, shares in this video the three reasons why they selected Azure:

  1. The richness of the development tools and already knowing how to use them
  2. The openness of the platform and ability to use open source
  3. Microsoft’s understanding and command of the enterprise market segment

Furthermore, Sensoria is using the Microsoft cloud and the Azure IoT platform to build a connected medical device platform, as they continue to develop new patient monitoring devices, like their smart sock v2.0 and Sensoria® Core, that drive improved outcomes for a variety of conditions. 

Learn more

Want to learn more about Microsoft and our work in healthcare? Check out our healthcare microsite, detailing our approach to the cloud and security, as well as compelling customer stories from Ochsner Health, BD, and others.


Announcing .NET Core 3.0 Preview 7


Today, we are announcing .NET Core 3.0 Preview 7. We’ve transitioned from creating new features to polishing the release. Expect a singular focus on quality for the remaining preview releases.

Download .NET Core 3.0 Preview 7 right now on Windows, macOS and Linux.

ASP.NET Core and EF Core are also releasing updates today.

The Microsoft .NET Site has been updated to .NET Core 3.0 Preview 7 (see the version displayed in the website footer). It’s been running successfully on Preview 7 for over two weeks, on Azure WebApps (as a self-contained app). We will likely move the site to Preview 8 builds in a couple of weeks.

ICYMI, check out the improvements we released in .NET Core 3.0 Preview 6 and the June Update on WPF, both from last month.

Go Live

.NET Core 3.0 Preview 7 is supported by Microsoft and can be used in production. We strongly recommend that you test your app running on Preview 7 before deploying Preview 7 into production. If you find an issue with .NET Core 3.0, please file a GitHub issue and/or contact Microsoft support.

We intend to make very few changes after Preview 7 for most APIs. Notable exceptions are: WPF, Windows Forms, Blazor and Entity Framework. Any breaking changes after Preview 7 will be documented.

We are working to ensure a high degree of compatibility with .NET Core 1.x and 2.x apps, making it straightforward to upgrade existing apps to .NET Core 3.0.

.NET Core SDK Size Improvements

The .NET Core SDK is significantly smaller with .NET Core 3.0. The primary reason is that we changed the way we construct the SDK, by moving to purpose-built “packs” of various kinds (reference assemblies, frameworks, templates). In previous versions (including .NET Core 2.2), we constructed the SDK from NuGet packages, which included many artifacts that were not required and wasted a lot of space.

You can see how we calculated these file sizes in the .NET Core 3.0 SDK Size Improvements gist. Detailed instructions are provided so that you can run the same tests in your own environment.

.NET Core 3.0 SDK Size (size change in brackets)

Operating System   Installer Size (change)    On-disk Size (change)
Windows            164MB (-440KB; 0%)         441MB (-968MB; -68.7%)
Linux              115MB (-55MB; -32%)        332MB (-1068MB; -76.2%)
macOS              118MB (-51MB; -30%)        337MB (-1063MB; -75.9%)

The size improvements for Linux and macOS are dramatic. The improvement for Windows is smaller because we have added WPF and Windows Forms as part of .NET Core 3.0. It’s amazing that we added WPF and Windows Forms in 3.0 and the installer is still (a little bit) smaller.

You can see the same benefit with .NET Core SDK Docker images (here, limited to x64 Debian and Alpine).

Distro   2.2 Compressed Size   3.0 Compressed Size
Debian   598MB                 264MB
Alpine   493MB                 148MB

Closing

The .NET Core 3.0 release is coming close to completion, and the team is solely focused on stability and reliability now that we’re no longer building new features. Please tell us about any issues you find, ideally as quickly as possible. We want to get as many fixes in as possible before we ship the final 3.0 release.

We recommend that you start planning to adopt .NET Core 3.0. This recommendation is stronger if you are using containers. The 3.0 improvements for containers are critical for anyone using docker resource limits directly or via an orchestrator.

If you install daily builds, please read an important PSA on .NET Core master branches.

The post Announcing .NET Core 3.0 Preview 7 appeared first on .NET Blog.

Always-on, real-time threat protection with Azure Cosmos DB – part one


This two-part blog post is a part of a series about how organizations are using Azure Cosmos DB to meet real world needs, and the difference it’s making to them. In part one, we explore the challenges that led the Microsoft Azure Advanced Threat Protection team to adopt Azure Cosmos DB and how they’re using it. In part two, we’ll examine the outcomes resulting from the team’s efforts.

Transformation of a real-time security solution to cloud scale

Microsoft Azure Advanced Threat Protection is a cloud-based security service that uses customers’ on-premises Active Directory signals to identify, detect, and investigate advanced threats, compromised identities, and malicious insider actions. Launched in 2018, it represents the evolution of Microsoft Advanced Threat Analytics, an on-premises solution, into Azure. Both offerings are composed of two main components:

  1. An agent, or sensor, which is installed on each of an organization’s domain controllers. The sensor inspects traffic sent from users to the domain controller along with Event Tracing for Windows (ETW) events generated by the domain controller, sending that information to a centralized back-end.
  2. A centralized back-end, or center, which aggregates the information from all the sensors, learns the behavior of the organization’s users and computers, and looks for anomalies that may indicate malicious activity.

Advanced Threat Analytics’ center used an on-premises instance of MongoDB as its main database—and still does today for on-premises installations. However, in developing the Azure Advanced Threat Protection center, a managed service in the cloud, Microsoft needed something more performant and scalable. “The back-end of Azure Advanced Threat Protection needs to massively scale, be upgraded on a weekly basis, and run continuously-evolving, advanced detection algorithms—essentially taking full advantage of all the power and intelligence that Azure offers,” explains Yaron Hagai, Principal Group Engineering Manager for Advanced Threat Analytics at Microsoft.

In searching for the best database for Azure Advanced Threat Protection to store its entities and profiles—the data learned in real time from all the sensors about each organization’s users and computers—Hagai’s team mapped out the following key requirements:

  • Elastic, per-customer scalability: Each organization that adopts Azure Advanced Threat Protection can install hundreds of sensors, generating potentially tens of thousands of events per second. To learn each organization’s baseline and apply its anomaly detection algorithms in real-time, Azure Advanced Threat Protection needed a database that could efficiently and cost-effectively scale.
  • Ease of migration: The Azure Advanced Threat Protection data model is constantly evolving to support changes in detection logic. Hagai’s team didn’t want to worry about constantly maintaining backwards compatibility between the service’s code and its ever-changing data model, which meant they needed a database that could support quick and easy data migration with almost every new update to Azure Advanced Threat Protection they deployed.
  • Geo-replication: Like all Azure services, Advanced Threat Protection must support customers’ critical disaster recovery and business continuity needs, including in the highly unlikely event of a datacenter failure. Through the use of geo-replication, customers’ data can be replicated from a primary datacenter to a backup datacenter, and the Azure Advanced Threat Protection workload can be switched to the backup datacenter in the event of a primary datacenter failure.

A managed, scalable, schema-less database in the cloud

The team chose Azure Cosmos DB as the back-end database for Azure Advanced Threat Protection. “As the only managed, scalable, schema-less database in Azure, Azure Cosmos DB was the obvious choice,” says Hagai. “It offered the scalability needed to support our growing customer base and the load that growth would put on our back-end service. It also provided the flexibility needed in terms of the data we store on each organization and its computers and users. And it offered the flexibility needed to continually add new detections and modify existing ones, which in turn requires the ability to constantly change the data stored in our Azure Cosmos DB collections.”

Azure Advanced Threat Protection diagram

Collections and partitioning

Of the many APIs that Azure Cosmos DB supports, the development team considered both the SQL API and the Azure Cosmos DB API for MongoDB for Azure Advanced Threat Protection. Eventually, they chose the SQL API because it gave them access to a rich, Microsoft-authored client SDK with support for multi-homing across global regions, and direct connectivity mode for low latency. Developers chose to allocate one Azure Cosmos DB database per tenant, or customer. Each database has five collections, which each start with a single partition. “This allows us to easily delete the data for a customer if they stop using Azure Advanced Threat Protection,” explains Hagai. “More importantly, however, it lets us scale each customer’s collections independently based on the throughput generated by their on-premises sensors.”
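
As a rough sketch of this layout, the snippet below provisions a per-tenant database and its collections with the azure-cosmos Python SDK. The endpoint, key, tenant name, partition key path, and starting throughput are all illustrative assumptions rather than the team’s actual configuration.

    from azure.cosmos import CosmosClient, PartitionKey

    ENDPOINT = "https://<account>.documents.azure.com:443/"  # placeholder
    KEY = "<primary-key>"                                    # placeholder
    TENANT = "contoso"                                       # hypothetical tenant

    client = CosmosClient(ENDPOINT, credential=KEY)

    # One database per tenant: deleting a departing customer's data is a single call.
    database = client.create_database_if_not_exists(id=f"tenant-{TENANT}")

    # Five collections per tenant, each starting with a single partition and minimal
    # throughput; each can be scaled independently later.
    for name in ["UniqueEntity", "UniqueEntityProfile", "SystemProfile", "SystemEntity", "Alert"]:
        database.create_container_if_not_exists(
            id=name,
            partition_key=PartitionKey(path="/id"),  # assumed partition key
            offer_throughput=400,
        )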

Of the set of collections per customer, two usually grow to more than one partition:

  • UniqueEntity, which contains all the metadata about the computers and users in the organization, as synchronized from Active Directory.
  • UniqueEntityProfile, which contains the behavioral baseline for each entity in the UniqueEntity collection and is used by detection logic to identify behavioral anomalies that imply a compromised user or computer, or a malicious insider.

“Both collections have very high read/write throughput with large Request Units per second (RU/s) consumption,” explains Hagai. “Azure Cosmos DB seamlessly scales out storage of collections as they grow, and some of our largest customers have scaled up to terabytes in size per collection, which would not have been possible with MongoDB on VMs.”

The other three collections for each customer typically contain less than 1,000 documents and do not grow past a single partition. They include:

  • SystemProfile, which contains data learned for the tenant and applied to behavioral based detections.
  • SystemEntity, which contains configuration information and data about tenants.
  • Alert, which contains alerts that are generated and updated by Azure Advanced Threat Protection.

Migration

As the Azure Advanced Threat Protection detection logic constantly evolves and improves, so does the behavioral data stored in each customer’s UniqueEntityProfile collection. To avoid the need for backwards compatibility with outdated schemas, Azure Advanced Threat Protection maintains two migration mechanisms, which run with each upgrade to the service that includes changes to its data models:

  • On-the-fly: As Azure Advanced Threat Protection reads documents from Azure Cosmos DB, it checks their version field. If the version is outdated, Azure Advanced Threat Protection migrates the document to the current version using explicit transformation logic written by Hagai’s team of developers.
  • Batch: After a successful upgrade, Azure Advanced Threat Protection spins up a scheduled task to migrate all documents for all customers to the newest version, excluding those that have already been migrated by the on-the-fly mechanism.

Together, these two migration mechanisms ensure that after the service is upgraded and the data access layer code is changed, no errors occur from parsing outdated documents. No backwards-compatibility code is needed besides the explicit migration code, which is always removed in the subsequent version.
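
A minimal sketch of these two paths is shown below. The version numbers, field names, and the query helper are hypothetical stand-ins for the team’s explicit migration code.

    CURRENT_VERSION = 7  # hypothetical current schema version

    def migrate_on_read(document: dict) -> dict:
        """On-the-fly: upgrade an outdated document as it is read."""
        version = document.get("version", 1)
        if version < 6:
            # Example transformation: rename a field used by newer detection logic.
            document["logonEvents"] = document.pop("events", [])
            version = 6
        if version < 7:
            # Example transformation: add a new field with a default value.
            document.setdefault("riskScore", 0)
            version = 7
        document["version"] = CURRENT_VERSION
        return document

    def migrate_batch(container, find_outdated):
        """Batch: after an upgrade, migrate any documents the on-the-fly path missed."""
        for doc in find_outdated(container):  # yields documents with version < CURRENT_VERSION
            container.upsert_item(migrate_on_read(doc))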

Automatic scaling and backups

Collections with very high read/write throughput are often rate-limited as they reach their provisioned RU/s limits. When one of the service’s nodes (each node is a virtual machine) tries to perform an operation against a collection and gets a “429 Too Many Requests” rate-limiting exception, it uses Azure Service Fabric remoting to send a request for increased throughput to a centralized auto-scale service. The centralized service aggregates such requests from multiple nodes to avoid increasing throughput more than once within a short window of time, as a single burst of throughput may affect multiple nodes. To minimize overall RU/s costs, a similar, periodic scale-down process reduces provisioned throughput when appropriate, such as during each customer’s non-working hours.
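
A condensed sketch of that flow, assuming the azure-cosmos Python SDK and a hypothetical request_scale_up callable standing in for the Service Fabric remoting call to the centralized auto-scale service:

    from azure.cosmos import exceptions

    THROUGHPUT_STEP = 1000  # additional RU/s to request per scale-up; illustrative value

    def write_with_autoscale_request(container, item, request_scale_up):
        """Attempt a write; on HTTP 429 (rate limiting), ask the central service for more throughput.

        request_scale_up stands in for the remoting call a node sends to the centralized
        auto-scale service, which aggregates requests so a single burst hitting many nodes
        triggers at most one throughput increase.
        """
        try:
            container.upsert_item(item)
        except exceptions.CosmosHttpResponseError as err:
            if err.status_code == 429:
                request_scale_up(container.id, THROUGHPUT_STEP)
            else:
                raise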

Azure Advanced Threat Protection takes advantage of the auto-backup feature of Azure Cosmos DB to help protect each of the collections. The backups reside in Azure Blob storage and are replicated to another region through the use of geo-redundant storage (GRS). Azure Advanced Threat Protection also replicates customer configuration data to another region, which allows for quick recovery in the case of a disaster. “We do this primarily to safeguard the sensor configuration data—preventing the need for an IT admin to reconfigure hundreds of sensors if the original database is lost,” explains Hagai.

Azure Advanced Threat Protection recently began onboarding full geo-replication. “We’ve started to enable geo-replication and multi-region writes for seamless and effortless replication of our production data to another region,” says Hagai. “This will allow us to further improve and guarantee service availability and will simplify service delivery versus having to maintain our own high-availability mechanisms.”

Continue on to part two, which covers the outcomes resulting from the Azure Advanced Threat Protection team’s implementation of Azure Cosmos DB.

Always-on, real-time threat protection with Azure Cosmos DB – part two


This two-part blog post is a part of a series about how organizations are using Azure Cosmos DB to meet real world needs, and the difference it’s making to them. In part one, we explored the challenges that led the Microsoft Azure Advanced Threat Protection team to adopt Azure Cosmos DB and how they’re using it. In part two, we’ll examine the outcomes resulting from the team’s efforts.

Built-in scalability, performance, availability, and more

The Azure Advanced Threat Protection team’s decision to use Azure Cosmos DB for its cloud-based security service has enabled the team to meet all key requirements, including zero database maintenance, uncompromised real-time performance, elastic scalability, high availability, and strong security and compliance. “Azure Cosmos DB gives us everything we need to deliver an enterprise-grade security service that’s capable of supporting the largest companies in the world, including Microsoft itself,” says Yaron Hagai, Principal Group Engineering Manager for Advanced Threat Analytics at Microsoft.

Zero maintenance

A managed database service has saved Hagai’s team immense maintenance efforts, allowing Azure Advanced Threat Protection to stay up and running with only a handful of service engineers. “Azure Advanced Threat Protection saves us from having to patch and upgrade servers, worry about compliance, and so on,” says Hagai. “We also get capabilities like encryption at rest without any work on our part, which further enables us to direct our resources to improving the service instead of keeping it up and running.”

Scaling to support customer growth is just as hands-free. “We use Azure CLI scripts to provision and deprovision clusters in multiple Azure regions—it’s all done automatically, so new clusters for new customers can be deployed easily and when needed,” says Hagai. “Scaling is also automatic. Throughput-based splitting has been especially helpful because it lets our databases scale to support customer growth with zero maintenance from the team.”

Real-time performance

Azure Cosmos DB is delivering the performance needed for an important security service like Azure Advanced Threat Protection. “Since we protect organizations after they have been breached, speed of detection is essential to minimizing the damage that might be done,” explains Hagai. “A high-throughput, super-scalable database lets us support lots of complex queries in real time, which is what allows us to go from breach to alerting in seconds. The performance provided by Azure Cosmos DB is one more thing that makes it the most production-grade document DB in the market, which is another reason we chose it.”

The following graph shows sustained high throughput for the service’s largest tenant, with a heavy bias towards writes, which happen every 10 minutes as Azure Advanced Threat Protection persists in-memory caches of profiles to Azure Cosmos DB.

Graph showing sustained high throughput for the service’s largest tenant

Elastic scalability

Since Azure Advanced Threat Protection launched in March 2018, its usage has grown exponentially in terms of both users protected and paying organizations. “Azure Cosmos DB allows us to scale constantly, without any friction, which has helped us support a 600 percent growth in our customer base over the past year,” says Hagai. “That same scalability allows us to support larger customer installations than we could with Microsoft Advanced Threat Analytics, our on-premises solution. Microsoft’s own internal network is a prime example; it had grown too large to support with a single, on-premises server running MongoDB, but with Azure Cosmos DB, it’s no problem.”

Scaling up and down to support frequent fluctuations in traffic, as shown in the following graph, is just as painless. “The graph shows traffic for our largest tenant, with the spikes in throughput due to scheduled tasks that produce business telemetry,” he explains. “This is a great example of the auto-scaling benefits of Azure Cosmos DB and how they allow us to automatically scale up individual databases to support a short burst of throughput each day, then automatically scale back down after the telemetries are calculated to minimize our service delivery costs.”

Graph showing traffic for a large tenant, with the spikes in throughput due to scheduled tasks that produce business telemetry

Strong security and compliance

Because Azure Advanced Threat Protection is built on Azure Cosmos DB and other Azure services, which themselves have high compliance certifications, it was easy to achieve the same for Azure Advanced Threat Protection. “The access control mechanisms in Azure Cosmos DB allow us to easily secure access and apply advanced JIT policies, helping us keep customer data secure,” says Hagai.

High availability

Although the availability SLA for Azure Cosmos DB is 99.999 percent for multi-region databases, according to Hagai, the actual availability they’ve seen in production is even higher. “I had the Azure Cosmos DB team pull some historical availability numbers, and it turns out that the actual availability we’ve seen during April, May, and June of 2019 has been between 99.99995 and 99.99999 percent,” says Hagai. “To us, that’s essentially 100 percent uptime, and another thing we don’t need to worry about.”

Learn more about Azure Advanced Threat Protection and Azure Cosmos DB today.
