
Leverage Azure premium file shares for high availability of data

This post was co-authored by Mike Emard, Principal Program Manager, Azure Storage.

SQL Server on Azure virtual machines brings cloud agility, elasticity, and scalability benefits to SQL Server workloads. SQL virtual machines offer full control over the operating system, virtual machine size, storage subsystem, and the level of manageability needed for your workload. The preconfigured SQL Server images from Azure Marketplace come with free SQL Server manageability benefits like Automated Backup and Automated Patching. If you choose to self-install SQL Server on Azure virtual machines, you can register with the SQL virtual machine resource provider to get all the benefits available to SQL Server marketplace images, along with simplified license management.

Microsoft provides an availability SLA of 99.95 percent that covers just the virtual machine, not SQL Server. For SQL Server high availability on Azure virtual machines, you should host at least two virtual machine instances in an availability set (for 99.95 percent availability) or in different availability zones (for 99.99 percent availability) and configure a high availability feature for SQL Server, such as Always On availability groups or a failover cluster instance.

Today, we are announcing a new option for SQL Server high availability: SQL Server failover cluster instances with Azure premium file shares. Premium file shares are solid-state drive backed, consistently low-latency file shares that are fully supported for use with SQL Server failover cluster instances for SQL Server 2012 and above on Windows Server 2012 and above.

Azure premium file shares offer the following key advantages for SQL Server failover cluster instances:

Ease of management

  • File shares are fully managed by Azure.
  • Provisioning is very simple.
  • Resize capacity in seconds with zero downtime by setting a property of the share. Increasing your storage capacity as your database grows is simple and does not cause downtime, so there is no need to provision lots of extra storage up front.
  • Increase input/output operations per second (IOPS) in seconds with zero downtime by resizing your share. Increase the size of your premium share to get the IOPS your workload needs.
  • Seasonal workloads can temporarily increase IOPS and resize back down just as easily. Again, zero downtime!

Lower the work on your virtual machines

  • I/O is offloaded to your managed file share, so you may be able to use a smaller, less expensive virtual machine.

Burstable I/O capacity

  • Premium file shares (PFS) provide automated bursting of IOPS capacity up to a limit, based on a credit system. If your workload needs occasional bursts, you should leverage this free and fully automated burst capacity. Follow the premium files provisioning and bursting documentation to learn more.

Zonal Redundancy

  • Zone-redundant storage is available in some regions. You can deploy a SQL Server failover cluster instance with one virtual machine in one availability zone and another in a different zone to achieve 99.99 percent high availability for both compute and storage.

Premium file shares provide IOPS and throughput capacity that will meet the needs of many workloads. However, for I/O-intensive workloads, consider SQL Server failover cluster instances with Storage Spaces Direct based on managed premium disks or ultra disks. You should check the IOPS activity of your current environment and verify that premium files will provide the IOPS you need before starting a migration. Use Windows Performance Monitor disk counters to monitor the total IOPS (disk transfers per second) and throughput (disk bytes per second) required for SQL Server data, log, and tempdb files. Many workloads have bursty I/O, so it is a good idea to check during heavy usage periods and note the maximum IOPS as well as the average IOPS. Premium file shares provide IOPS based on the size of the share. Premium files also provide complimentary bursting where you can burst your I/O to triple the baseline amount for up to one hour.
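
If you would rather capture these counters programmatically than watch them in the Performance Monitor UI, a minimal C# sketch along the following lines can log them from the virtual machine. It is only a sketch: PhysicalDisk, Disk Transfers/sec, and Disk Bytes/sec are the standard Windows counters referenced above, while the sampling interval, the "_Total" instance, and the class name are illustrative choices.

using System;
using System.Diagnostics;
using System.Threading;

class DiskIoSampler
{
    static void Main()
    {
        // "_Total" aggregates all physical disks; substitute a specific instance
        // name to watch only the volume that hosts your database files.
        var iops = new PerformanceCounter("PhysicalDisk", "Disk Transfers/sec", "_Total");
        var throughput = new PerformanceCounter("PhysicalDisk", "Disk Bytes/sec", "_Total");

        // Rate counters return 0 on the first read; take an initial sample to prime them.
        iops.NextValue();
        throughput.NextValue();

        while (true)
        {
            Thread.Sleep(1000);
            Console.WriteLine(
                "{0:T}  IOPS: {1:F0}  Throughput: {2:F1} MB/s",
                DateTime.Now,
                iops.NextValue(),
                throughput.NextValue() / (1024.0 * 1024.0));
        }
    }
}

Run a sampler like this during a heavy usage window and keep both the average and the peak values; those are the numbers to compare against the baseline and burst IOPS of the premium share size you plan to provision.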

Use the step-by-step guide for configuring a SQL Server failover cluster instance with Azure premium files to get started with PFS today and leverage the new technologies and innovations Azure provides to modernize SQL Server workloads. Please share your feedback through UserVoice. We look forward to hearing from you!


Update Visual Studio for Mac for an improved Unity experience!

The past year has been an exciting one for Unity developers. Unity is the leading real-time 3D creation platform. It’s rooted in game development and expanding into other industries, too. Unity Reflect enables developers to view BIM (Building Information Modeling) data in real time. Did you know that over 60% of AR/VR content is created with Unity? Many other innovations for film, automotive, and marketing were announced as well. With all that excitement, you may have missed the debugging improvements and productivity features we’ve made to Visual Studio for Mac.

Update for better debugging on Mac

Earlier this month we talked about the New Editor in Visual Studio for Mac and the language services it now shares with Visual Studio. That brings a native experience, new features, and closer parity between the two IDEs. Visual Studio Tools for Unity continues that theme by sharing the debugger experience for Unity projects between the two IDEs. This means that if you’re working on a Mac, you’ll get an improved experience from the enhancements we’ve made on Windows over previous years. We’re already hearing great feedback from developers on these improvements. That’s not all, though; let’s look at the other improvements we’ve made to Visual Studio for Mac.

Easily follow best practices and focus on what matters

I’m excited to share that Visual Studio for Mac and Visual Studio now have diagnostics unique to Unity projects. To begin, we’ve started with over a dozen Unity-specific scenarios where both IDEs recognize and offer informational suggestions or refactoring options. Even better, the IDE suppresses general C# diagnostics that don’t apply to Unity projects. For example, it will no longer recommend that you change your [SerializeField] members to readonly and break Unity Inspector links. This means you’ll find less noise in your error list, allowing you to focus on what really matters – scripting your games. To further improve on this, we plan to partner with you and the rest of the community by open sourcing these diagnostics. Together, we’ll improve how the IDE impacts your productivity and best practices.

Visual Studio for Mac showing refactoring of Unity specific diagnostics

Iterate faster with background-building and more

We’re continuing the productivity theme with more quality-of-life improvements. We noticed that Visual Studio for Mac was sometimes building when developers didn’t need it to, which resulted in longer wait times. We made it faster to iterate and debug by reducing that (sometimes) unnecessary work in the IDE. In addition, you can save time with background building by using the Automatic refreshing of Unity’s AssetDatabase on save setting. Enabled by default, this setting triggers Unity to compile in the background while you’re editing in Visual Studio for Mac.

Finally, save some time by using Attach to Unity and Play when starting a debug session. This attaches the debugger to Unity and signals for the Editor to Play all in a single step. You can seamlessly iterate on your debugging workflow right from the IDE.

animated image showing Attach to Unity and Play from Visual Studio for Mac

Ready for a more reliable debugger and productive scripting experience?

Now is a great time to update to the latest version of Visual Studio for Mac to take advantage of all the improvements and time-saving productivity features for Unity projects. We’re continuing improvements for Visual Studio for Mac by listening to your feedback and sharing code between the IDEs. If you’d like to help make these tools better, share your feedback and report issues on Developer Community. Be sure to follow @VisualStudioMac for the latest news and updates!

The post Update Visual Studio for Mac for an improved Unity experience! appeared first on Visual Studio Blog.

.NET Framework October 2019 Cumulative Updates for Windows 10 version 1903 and Windows 10 version 1909

Today, we are releasing the October 2019 Cumulative Updates for .NET Framework 3.5 and 4.8 on Windows 10 version 1903 and Windows 10 version 1909.

Quality and Reliability

This release contains the following quality and reliability improvements.

ASP.NET

  • Addresses an issue with ValidationContext.MemberName when using a custom DataAnnotation.ValidationAttribute (an illustrative sketch of such an attribute follows below).
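
For context, the scenario involves a custom attribute deriving from ValidationAttribute and reading ValidationContext.MemberName inside its IsValid override. The sketch below is purely illustrative (NotPlaceholderAttribute and its rule are hypothetical); it only shows where MemberName, the property the fix concerns, comes into play.

using System.ComponentModel.DataAnnotations;

// Hypothetical custom validation attribute, shown only to illustrate how
// ValidationContext.MemberName is consumed during validation.
public class NotPlaceholderAttribute : ValidationAttribute
{
    protected override ValidationResult IsValid(object value, ValidationContext validationContext)
    {
        // MemberName identifies the member being validated and is used in the error message.
        if (value is string text && text == "TODO")
        {
            return new ValidationResult(
                $"{validationContext.MemberName} still contains a placeholder value.");
        }

        return ValidationResult.Success;
    }
}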

CLR [1]

  • Reduces the risk of returning unordered ConstructorInfo objects from Type.GetConstructors().
  • Improved behavior in scenarios where external bugs (such as a double-free) prevent underlying OS threads from starting. The runtime now fails with a diagnostic error rather than hanging waiting for the thread to start. This allows better failure recovery and better diagnostics of the problem that caused the failure.
  • Addresses an issue with late-bound .NET COM calls containing SafeArrays where the SafeArray is not fully initialized.

Windows Forms

  • Addresses an issue that prevented navigating to the last item of a dropped-down menu with a single up-arrow key press.
  • Addresses an issue where the property grid can throw a NullReferenceException when the selection changes to null (nothing is selected) in response to value changes.

WPF [2]

  • Addresses an issue where software rendering fails to draw images whose position and scaling are too large. For example, an Image element with Width=10, sourced to a bitmap with width=500 and positioned 700 pixels from the left edge of the enclosing window, fails to appear because the scaling factor S = 500/10 = 50 and the position X = 700 are too large, in the sense that their product S * X = 50 * 700 = 35000 exceeds 2^15 = 32768.

[1] Common Language Runtime (CLR)
[2] Windows Presentation Foundation (WPF)

Getting the Update

The Cumulative Update is available via Windows Update, Windows Server Update Services (WSUS), and the Microsoft Update Catalog.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog. For Windows 10, .NET Framework 4.8 updates are available via Windows Update, Windows Server Update Services, and the Microsoft Update Catalog. Updates for other versions of .NET Framework are part of the Windows 10 Monthly Cumulative Update.

Note: Customers that rely on Windows Update and Windows Server Update Services will automatically receive the .NET Framework version-specific updates. Advanced system administrators can also make use of the direct Microsoft Update Catalog download links below for the .NET Framework-specific updates. Before applying these updates, please carefully review the .NET Framework version applicability to ensure that you only install updates on systems where they apply.

Product Version                                          Cumulative Update
Windows 10 1909 and Windows Server, version 1909         4522742
    .NET Framework 3.5, 4.8 (Catalog)                    4519573
Windows 10 1903 and Windows Server, version 1903         4522741
    .NET Framework 3.5, 4.8 (Catalog)                    4519573

Previous Monthly Rollups

The last few .NET Framework Monthly updates are listed below for your convenience:

The post .NET Framework October 2019 Cumulative Updates for Windows 10 version 1903 and Windows 10 version 1909 appeared first on .NET Blog.

Announcing TypeScript 3.7 RC

We’re pleased to announce TypeScript 3.7 RC, the release candidate of TypeScript 3.7. Between now and the final release, we expect no further changes except for critical bug fixes.

To get started using the RC, you can get it through NuGet, or use npm with the following command:

npm install typescript@rc

You can also get editor support for the RC in Visual Studio 2019/2017, as well as in Visual Studio Code and Sublime Text by following the setup instructions for each editor.

TypeScript 3.7 RC includes some of our most highly-requested features!

Let’s dive in and see what’s new, starting with the highlight feature of 3.7: Optional Chaining.

Optional Chaining

TypeScript 3.7 implements one of the most highly-demanded ECMAScript features yet: optional chaining! Our team has been heavily involved in TC39 to champion the feature to Stage 3 so that we can bring it to all TypeScript users.

So what is optional chaining? Well, at its core, optional chaining lets us write code where we can immediately stop running some expressions if we run into a null or undefined. The star of the show in optional chaining is the new ?. operator for optional property accesses. When we write code like

let x = foo?.bar.baz();

this is a way of saying that when foo is defined, foo.bar.baz() will be computed; but when foo is null or undefined, stop what we’re doing and just return undefined.

More plainly, that code snippet is the same as writing the following.

let x = (foo === null || foo === undefined) ?
    undefined :
    foo.bar.baz();

Note that if bar is null or undefined, our code will still hit an error accessing baz. Likewise, if baz is null or undefined, we’ll hit an error at the call site. ?. only checks for whether the value on the left of it is null or undefined – not any of the subsequent properties.

You might find yourself using ?. to replace a lot of code that performs intermediate property checks using the && operator.

// Before
if (foo && foo.bar && foo.bar.baz) {
    // ...
}

// After-ish
if (foo?.bar?.baz) {
    // ...
}

Keep in mind that ?. acts differently than those && operations since && will act specially on “falsy” values (e.g. the empty string, 0, NaN, and, well, false).

Optional chaining also includes two other operations. First there’s optional element access which acts similarly to optional property accesses, but allows us to access non-identifier properties (e.g. arbitrary strings, numbers, and symbols):

/**
 * Get the first element of the array if we have an array.
 * Otherwise return undefined.
 */
function tryGetFirstElement<T>(arr?: T[]) {
    return arr?.[0];
    // equivalent to
    //   return (arr === null || arr === undefined) ?
    //       undefined :
    //       arr[0];
}

There’s also optional call, which allows us to conditionally call expressions if they’re not null or undefined.

async function makeRequest(url: string, log?: (msg: string) => void) {
    log?.(`Request started at ${new Date().toISOString()}`);
    // equivalent to
    //   if (log !== null && log !== undefined) {
    //       log(`Request started at ${new Date().toISOString()}`);
    //   }

    const result = (await fetch(url)).json();

    log?.(`Request finished at at ${new Date().toISOString()}`);

    return result;
}

The “short-circuiting” behavior that optional chains have is limited to “ordinary” and optional property accesses, calls, and element accesses within the chain – it doesn’t expand any further out from these expressions. In other words,

let result = foo?.bar / someComputation()

doesn’t stop the division or someComputation() call from occurring. It’s equivalent to

let temp = (foo === null || foo === undefined) ?
    undefined :
    foo.bar;

let result = temp / someComputation();

That might result in dividing undefined, which is why in strictNullChecks, the following is an error.

function barPercentage(foo?: { bar: number }) {
    return foo?.bar / 100;
    //     ~~~~~~~~
    // Error: Object is possibly undefined.
}

For more details, you can read up on the proposal and view the original pull request.

Nullish Coalescing

The nullish coalescing operator is another upcoming ECMAScript feature that goes hand-in-hand with optional chaining, and which our team has been deeply involved in championing.

You can think of this feature – the ?? operator – as a way to “fall back” to a default value when dealing with null or undefined. When we write code like

let x = foo ?? bar();

this is a new way to say that the value foo will be used when it’s “present”; but when it’s null or undefined, calculate bar() in its place.

Again, the above code is equivalent to the following.

let x = (foo !== null && foo !== undefined) ?
    foo :
    bar();

The ?? operator can replace uses of || when trying to use a default value. For example, the following code snippet tries to fetch the volume that was last saved in localStorage (if it ever was); however, it has a bug because it uses ||.

function initializeAudio() {
    let volume = localStorage.volume || 0.5

    // ...
}

When localStorage.volume is set to 0, the page will set the volume to 0.5 which is unintended. ?? avoids some unintended behavior from 0, NaN and "" being treated as falsy values.

We owe a large thanks to community members Wenlu Wang and Titian Cernicova Dragomir for implementing this feature! For more details, check out their pull request and the nullish coalescing proposal repository.

Assertion Functions

There’s a specific set of functions that throw an error if something unexpected happened. They’re called “assertion” functions. As an example, Node.js has a dedicated function for this called assert.

assert(someValue === 42);

In this example if someValue isn’t equal to 42, then assert will throw an AssertionError.

Assertions in JavaScript are often used to guard against improper types being passed in. For example,

function multiply(x, y) {
    assert(typeof x === "number");
    assert(typeof y === "number");

    return x * y;
}

Unfortunately in TypeScript these checks could never be properly encoded. For loosely-typed code this meant TypeScript was checking less, and for slightly conservative code it often forced users to use type assertions.

function yell(str) {
    assert(typeof str === "string");

    return str.toUppercase();
    // Oops! We misspelled 'toUpperCase'.
    // Would be great if TypeScript still caught this!
}

The alternative was to instead rewrite the code so that the language could analyze it, but this isn’t convenient.

function yell(str) {
    if (typeof str !== "string") {
        throw new TypeError("str should have been a string.")
    }
    // Error caught!
    return str.toUppercase();
}

Ultimately the goal of TypeScript is to type existing JavaScript constructs in the least disruptive way. For that reason, TypeScript 3.7 introduces a new concept called “assertion signatures” which model these assertion functions.

The first type of assertion signature models the way that Node’s assert function works. It ensures that whatever condition is being checked must be true for the remainder of the containing scope.

function assert(condition: any, msg?: string): asserts condition {
    if (!condition) {
        throw new AssertionError(msg)
    }
}

asserts condition says that whatever gets passed into the condition parameter must be true if the assert returns (because otherwise it would throw an error). That means that for the rest of the scope, that condition must be truthy. As an example, using this assertion function means we do catch our original yell example.

function yell(str) {
    assert(typeof str === "string");

    return str.toUppercase();
    //         ~~~~~~~~~~~
    // error: Property 'toUppercase' does not exist on type 'string'.
    //        Did you mean 'toUpperCase'?
}

function assert(condition: any, msg?: string): asserts condition {
    if (!condition) {
        throw new AssertionError(msg)
    }
}

The other type of assertion signature doesn’t check for a condition, but instead tells TypeScript that a specific variable or property has a different type.

function assertIsString(val: any): asserts val is string {
    if (typeof val !== "string") {
        throw new AssertionError("Not a string!");
    }
}

Here asserts val is string ensures that after any call to assertIsString, any variable passed in will be known to be a string.

function yell(str: any) {
    assertIsString(str);

    // Now TypeScript knows that 'str' is a 'string'.

    return str.toUppercase();
    //         ~~~~~~~~~~~
    // error: Property 'toUppercase' does not exist on type 'string'.
    //        Did you mean 'toUpperCase'?
}

These assertion signatures are very similar to writing type predicate signatures:

function isString(val: any): val is string {
    return typeof val === "string";
}

function yell(str: any) {
    if (isString(str)) {
        return str.toUppercase();
    }
    throw "Oops!";
}

And just like type predicate signatures, these assertion signatures are incredibly expressive. We can express some fairly sophisticated ideas with these.

function assertIsDefined<T>(val: T): asserts val is NonNullable<T> {
    if (val === undefined || val === null) {
        throw new AssertionError(
            `Expected 'val' to be defined, but received ${val}`
        );
    }
}

To read up more about assertion signatures, check out the original pull request.

Better Support for never-Returning Functions

As part of the work for assertion signatures, TypeScript needed to encode more about where and which functions were being called. This gave us the opportunity to expand support for another class of functions: functions that return never.

The intent of any function that returns never is that it never returns. It indicates that an exception was thrown, a halting error condition occurred, or that the program exited. For example, process.exit(...) in @types/node is specified to return never.

In order to ensure that a function never potentially returned undefined or effectively returned from all code paths, TypeScript needed some syntactic signal – either a return or throw at the end of a function. So users found themselves return-ing their failure functions.

function dispatch(x: string | number): SomeType {
    if (typeof x === "string") {
        return doThingWithString(x);
    }
    else if (typeof x === "number") {
        return doThingWithNumber(x);
    }
    return process.exit(1);
}

Now when these never-returning functions are called, TypeScript recognizes that they affect the control flow graph and accounts for them.

function dispatch(x: string | number): SomeType {
    if (typeof x === "string") {
        return doThingWithString(x);
    }
    else if (typeof x === "number") {
        return doThingWithNumber(x);
    }
    process.exit(1);
}

As with assertion functions, you can read up more at the same pull request.

(More) Recursive Type Aliases

Type aliases have always had a limitation in how they could be “recursively” referenced. The reason is that any use of a type alias needs to be able to substitute itself with whatever it aliases. In some cases, that’s not possible, so the compiler rejects certain recursive aliases like the following:

type Foo = Foo;

This is a reasonable restriction because any use of Foo would need to be replaced with Foo which would need to be replaced with Foo which would need to be replaced with Foo which… well, hopefully you get the idea! In the end, there isn’t a type that makes sense in place of Foo.

This is fairly consistent with how other languages treat type aliases, but it does give rise to some slightly surprising scenarios for how users leverage the feature. For example, in TypeScript 3.6 and prior, the following causes an error.

type ValueOrArray<T> = T | Array<ValueOrArray<T>>;
//   ~~~~~~~~~~~~
// error: Type alias 'ValueOrArray' circularly references itself.

This is strange because there is technically nothing wrong with this use case: users could always write what was effectively the same code by introducing an interface.

type ValueOrArray<T> = T | ArrayOfValueOrArray<T>;

interface ArrayOfValueOrArray<T> extends Array<ValueOrArray<T>> {}

Because interfaces (and other object types) introduce a level of indirection and their full structure doesn’t need to be eagerly built out, TypeScript has no problem working with this structure.

But the workaround of introducing the interface wasn’t intuitive for users. And in principle there really wasn’t anything wrong with the original version of ValueOrArray that used Array directly. If the compiler was a little bit “lazier” and only calculated the type arguments to Array when necessary, then TypeScript could express these correctly.

That’s exactly what TypeScript 3.7 introduces. At the “top level” of a type alias, TypeScript will defer resolving type arguments to permit these patterns.

This means that code like the following that was trying to represent JSON…

type Json =
    | string
    | number
    | boolean
    | null
    | JsonObject
    | JsonArray;

interface JsonObject {
    [property: string]: Json;
}

interface JsonArray extends Array<Json> {}

can finally be rewritten without helper interfaces.

type Json =
    | string
    | number
    | boolean
    | null
    | { [property: string]: Json }
    | Json[];

This new relaxation also lets us recursively reference type aliases in tuples as well. The following code which used to error is now valid TypeScript code.

type VirtualNode =
    | string
    | [string, { [key: string]: any }, ...VirtualNode[]];

const myNode: VirtualNode =
    ["div", { id: "parent" },
        ["div", { id: "first-child" }, "I'm the first child"],
        ["div", { id: "second-child" }, "I'm the second child"]
    ];

For more information, you can read up on the original pull request.

--declaration and --allowJs

The --declaration flag in TypeScript allows us to generate .d.ts files (declaration files) from source TypeScript files like .ts and .tsx files. These .d.ts files are important because they allow TypeScript to type-check against other projects without re-checking/building the original source code. For the same reason, this setting is required when using project references.

Unfortunately, --declaration didn’t work with settings like --allowJs to allow mixing TypeScript and JavaScript input files. This was a frustrating limitation because it meant users couldn’t use --declaration when migrating codebases, even if they were JSDoc-annotated. TypeScript 3.7 changes that, and allows the two features to be mixed!

When using allowJs, TypeScript will use its best-effort understanding of JavaScript source code and save that to a .d.ts file in an equivalent representation. That includes all of its JSDoc smarts, so code like the following:

/**
 * @callback Job
 * @returns {void}
 */

/** Queues work */
export class Worker {
    constructor(maxDepth = 10) {
        this.started = false;
        this.depthLimit = maxDepth;
        /**
         * NOTE: queued jobs may add more items to queue
         * @type {Job[]}
         */
        this.queue = [];
    }
    /**
     * Adds a work item to the queue
     * @param {Job} work 
     */
    push(work) {
        if (this.queue.length + 1 > this.depthLimit) throw new Error("Queue full!");
        this.queue.push(work);
    }
    /**
     * Starts the queue if it has not yet started
     */
    start() {
        if (this.started) return false;
        this.started = true;
        while (this.queue.length) {
            /** @type {Job} */(this.queue.shift())();
        }
        return true;
    }
}

will currently be transformed into the following implementation-less .d.ts file:

/**
 * @callback Job
 * @returns {void}
 */
/** Queues work */
export class Worker {
    constructor(maxDepth?: number);
    started: boolean;
    depthLimit: number;
    /**
     * NOTE: queued jobs may add more items to queue
     * @type {Job[]}
     */
    queue: Job[];
    /**
     * Adds a work item to the queue
     * @param {Job} work
     */
    push(work: Job): void;
    /**
     * Starts the queue if it has not yet started
     */
    start(): boolean;
}
export type Job = () => void;

For more details, you can check out the original pull request.

Build-Free Editing with Project References

TypeScript’s project references provide us with an easy way to break codebases up to give us faster compiles. Unfortunately, editing a project whose dependencies hadn’t been built (or whose output was out of date) meant that the editing experience wouldn’t work well.

In TypeScript 3.7, when opening a project with dependencies, TypeScript will automatically use the source .ts/.tsx files instead. This means projects using project references will now see an improved editing experience where semantic operations are up-to-date and “just work”. You can disable this behavior with the compiler option disableSourceOfProjectReferenceRedirect which may be appropriate when working in very large projects where this change may impact editing performance.

You can read more about this change in its pull request.

Uncalled Function Checks

A common and dangerous error is to forget to invoke a function, especially if the function has zero arguments or is named in a way that implies it might be a property rather than a function.

interface User {
    isAdministrator(): boolean;
    notify(): void;
    doNotDisturb?(): boolean;
}

// later...

// Broken code, do not use!
function doAdminThing(user: User) {
    // oops!
    if (user.isAdministrator) {
        sudo();
        editTheConfiguration();
    }
    else {
        throw new AccessDeniedError("User is not an admin");
    }
}

Here, we forgot to call isAdministrator, and the code incorrectly allows non-administrator users to edit the configuration!

In TypeScript 3.7, this is identified as a likely error:

function doAdminThing(user: User) {
    if (user.isAdministrator) {
    //  ~~~~~~~~~~~~~~~~~~~~
    // error! This condition will always return true since the function is always defined.
    //        Did you mean to call it instead?
        // ...
    }
}

This check is a breaking change, but for that reason the checks are very conservative. This error is only issued in if conditions, and it is not issued on optional properties, if strictNullChecks is off, or if the function is later called within the body of the if:

interface User {
    isAdministrator(): boolean;
    notify(): void;
    doNotDisturb?(): boolean;
}

function issueNotification(user: User) {
    if (user.doNotDisturb) {
        // OK, property is optional
    }
    if (user.notify) {
        // OK, called the function
        user.notify();
    }
}

If you intended to test the function without calling it, you can correct the definition of it to include undefined/null, or use !! to write something like if (!!user.isAdministrator) to indicate that the coercion is intentional.

We owe a big thanks to GitHub user @jwbay who took the initiative to create a proof-of-concept and iterated to provide us with the current version.

// @ts-nocheck in TypeScript Files

TypeScript 3.7 allows us to add // @ts-nocheck comments to the top of TypeScript files to disable semantic checks. Historically this comment was only respected in JavaScript source files in the presence of checkJs, but we’ve expanded support to TypeScript files to make migrations easier for all users.

Semicolon Formatter Option

TypeScript’s built-in formatter now supports semicolon insertion and removal at locations where a trailing semicolon is optional due to JavaScript’s automatic semicolon insertion (ASI) rules. The setting is available now in Visual Studio Code Insiders, and will be available in Visual Studio 16.4 Preview 2 in the Tools Options menu.

New semicolon formatter option in VS Code

Choosing a value of “insert” or “remove” also affects the format of auto-imports, extracted types, and other generated code provided by TypeScript services. Leaving the setting on its default value of “ignore” makes generated code match the semicolon preference detected in the current file.

Breaking Changes

DOM Changes

Types in lib.dom.d.ts have been updated. These changes are largely correctness changes related to nullability, but impact will ultimately depend on your codebase.

Function Truthy Checks

As mentioned above, TypeScript now errors when functions appear to be uncalled within if statement conditions. An error is issued when a function type is checked in if conditions unless any of the following apply:

  • the checked value comes from an optional property
  • strictNullChecks is disabled
  • the function is later called within the body of the if

Local and Imported Type Declarations Now Conflict

Due to a bug, the following construct was previously allowed in TypeScript:

// ./someOtherModule.ts
interface SomeType {
    y: string;
}

// ./myModule.ts
import { SomeType } from "./someOtherModule";
export interface SomeType {
    x: number;
}

function fn(arg: SomeType) {
    console.log(arg.x); // Error! 'x' doesn't exist on 'SomeType'
}

Here, SomeType appears to originate in both the import declaration and the local interface declaration. Perhaps surprisingly, inside the module, SomeType refers exclusively to the imported definition, and the local declaration SomeType is only usable when imported from another file. This is very confusing and our review of the very small number of cases of code like this in the wild showed that developers usually thought something different was happening.

In TypeScript 3.7, this is now correctly identified as a duplicate identifier error. The correct fix depends on the original intent of the author and should be addressed on a case-by-case basis. Usually, the naming conflict is unintentional and the best fix is to rename the imported type. If the intent was to augment the imported type, a proper module augmentation should be written instead.

API Changes

To enable the recursive type alias patterns described above, the typeArguments property has been removed from the TypeReference interface. Users should instead use the getTypeArguments function on TypeChecker instances.

What’s Next?

The final version of TypeScript 3.7 will be released in a couple of weeks, ideally with minimal changes! We’d love for you to try this RC and give us your feedback to make sure 3.7 works great for everyone. If you have any suggestions or run into any problems, don’t hesitate to open an issue on our issue tracker!

Happy Hacking!

– Daniel Rosenwasser and the TypeScript Team

The post Announcing TypeScript 3.7 RC appeared first on TypeScript.

Optimizing Web Applications with OData $Select

OData as an API technology comes with so many options that give API consumers the power to shape, filter, order, and navigate through the data with very few lines of code.

In my previous articles, I talked in detail about how to enable OData on your existing ASP.NET Core API using the EDM model. In addition, I provided a code example so you can test the capabilities of OData on your own machine.

In this article, I suggest you keep that very same source code handy to learn more about one of the most powerful query options of OData: $select.

Introduction to OData $Select

OData $select enables API consumers to shape the data they are consuming before the data is returned from the API endpoint they are calling.

For instance, let’s assume you have an API that returns information about all students in Washington Schools.

The Student model is defined as follows:

    public class Student
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
        public int Score { get; set; }
        public byte[] Diploma { get; set; }
    }

In the student model above, a service that requires only students’ Names and Ids doesn’t need to pay the network latency tax of transferring the rest of the data a student object may contain.

For instance, Diploma is of type byte[], which can contain up to 2 GB of data, or 1 billion characters. In a network transfer, this could add up and cause less-than-optimal service-to-service communication.

With OData $select, you can define only the object members that you need for your process. This can be achieved simply by making the following API call:

https://localhost:44374/api/students?$select=Id, Name

The expected return value of a call like this would be as follows:

{
  "@odata.context": "https://localhost:44374/api/$metadata#Students(Id,Name)",
  "value": [
    {
      "Id": "1fa0248e-befa-4999-9c74-9c23dd747c63",
      "Name": "Ken Swan"
    },
    {
      "Id": "1833bb68-00f4-4133-913d-2394e90798ea",
      "Name": "Kailu Hu"
    },
    {
      "Id": "02acb647-b8cd-477a-a54f-ebad5a9ccdf6",
      "Name": "Jackie Lee"
    },
    {
      "Id": "5cda4467-72ae-4c86-919e-56fa902d9095",
      "Name": "Vishu Goli"
    }
  ]
}
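
The call and response above assume that the Students entity set and the OData route have already been wired up, as covered in the earlier posts in this series. As a reminder, a minimal Startup sketch for ASP.NET Core 2.2 with the Microsoft.AspNet.OData package might look like the following; the "api" route prefix matches the URLs used above, and the exact set of query options you opt in to is up to you.

using Microsoft.AspNet.OData.Builder;
using Microsoft.AspNet.OData.Extensions;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.OData.Edm;

public class Startup
{
    // Builds the EDM model that exposes the Student entity set used in the examples.
    private static IEdmModel GetEdmModel()
    {
        var builder = new ODataConventionModelBuilder();
        builder.EntitySet<Student>("Students");
        return builder.GetEdmModel();
    }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2);
        services.AddOData();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseMvc(routeBuilder =>
        {
            // Opt in to the query options you want to expose; Select() is what
            // enables the $select examples shown in this article.
            routeBuilder.Select().Filter().OrderBy().Expand().Count().MaxTop(100);
            routeBuilder.MapODataServiceRoute("api", "api", GetEdmModel());
        });
    }
}

Note that this is the same route builder the Application Level section later revisits; removing the Select() call there is what disables $select across the whole application.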

Performance Advantages of Using OData

The advantage of leveraging OData $select on the server side, rather than doing the processing on the client side, is that you don’t have to worry about whether your client will be able to process the data or have the required memory to do so. Instead, it puts more control on the server side, where optimizations can be managed and the hardware can be scaled for heavy lifting, which inevitably provides a seamless, consistent user experience on the client side from an API consumption perspective.

Non-Optimal Architecture

To better visualize this advantage, consider a non-optimal architecture in which every client receives the full data set from the API and applies its own mapping function f(x) to shape it.

The problem with this architecture is that f(x) is applied repeatedly across N clients, while unnecessarily large amounts of data are repeatedly transferred from the server only to be discarded on the client side.

In other words, the above architecture includes the cost of network traffic of large data sets in addition to the processing time.

Optimal Architecture

But with OData $select, the data shaping happens once on the server side before the response is returned.

In that architecture, we have decreased the cost of mapping the data from N operations to one, and we have also decreased network traffic by minimizing the data transferred to only the required information. This should eventually produce a better execution time holistically across the entire system and guarantee consistent UI performance.

Disabling OData $Select

There comes a time when you need to disable certain functions like $select for a given business need, mainly to ensure the entire model is returned to your API consumers.

Disabling the Select functionality can be implemented in three different ways:

Endpoint Level

At a particular endpoint level, you can leverage the options the EnableQuery() annotation offers to select which functionality you would like to allow or disallow. In this case, Select is disabled by applying the following parameter:

[HttpGet]
[EnableQuery(AllowedQueryOptions = Microsoft.AspNet.OData.Query.AllowedQueryOptions.None)]
public ActionResult<IQueryable<Student>> Get()
{
    ...
}

You can use the same parameter options to allow or disallow any number of query options you would like, which we will extensively talk about in the next articles.

Controller Level

The other option is to completely disallow Select at the controller level, by moving the EnableQuery() annotation, with the same restrictions we mentioned above, to the controller as follows:

[EnableQuery(AllowedQueryOptions = Microsoft.AspNet.OData.Query.AllowedQueryOptions.None)]
public class StudentsController : ControllerBase
{
    ...
}

Application Level

You can also disallow the Select query option across your entire application by removing the Select() call from the route builder setup shown below:

app.UseMvc(routeBuilder =>
{
    routeBuilder.Select();
    routeBuilder.MapODataServiceRoute("api", "api", GetEdmModel());
});

Final Notes

  1. There are so many great advantages in allowing your client to pick and choose which pieces of information they need from your API to fit their business.
  2. OData $select minimizes the effort on the client side when it comes to data mapping and processing; it handles all of that to ensure a consistent user experience.
  3. The current implementation of OData supports only up to ASP.NET Core 2.2, and the team is working diligently to release OData for ASP.NET Core 3.0 by the 2nd quarter of 2020.

The post Optimizing Web Applications with OData $Select appeared first on OData.

Top Stories from the Microsoft DevOps Community – 2019.10.25

Azure DevOps and Azure Cloud thrive through partnerships. I am especially grateful to our community members and partners who work with us on broadening that ecosystem. In this week’s newsletter, we will highlight integrations between Azure DevOps and a range of third-party and first-party tools.

How to Use Project “Piper” Docker Images for CI/CD with Azure DevOps
SAP products are widely used across our industry, and Microsoft just extended our partnership with SAP. Project Piper is SAP’s open-source solution for Continuous Integration and Continuous Delivery, and you can now use Project Piper out of the box with Azure DevOps! In this guide, Sarah Noack walks us through setting up an Azure YAML Pipeline using the Piper Docker images. Thank you Sarah, for publishing this overview!

Sales Force Power Scripts
This new Azure DevOps extension is an integration between Azure DevOps and Salesforce. To quote the extension description, “SFPowerscripts is an Azure Pipelines Extension that converts Azure Pipelines into a CI/CD platform for Salesforce”. Thank you, Azlam Abdulsalam, for your work on the extension!

Integrate OpenCover with Azure DevOps
OpenCover is an open source code coverage tool for .NET applications. In this blog, Niels Nijveldt walks us through integrating OpenCover into an Azure Pipeline, using PowerShell scripts to automate the OpenCover report generation. Thank you, Niels!

Setting up Azure DevOps for your Data Factory – Part 3/3 – Create release pipeline
This week I came across two guides on using Azure Pipelines to deploy to Azure Data Factory, and I wanted to share both. This post from Helge Rege Gårdsvoll shows us how to set up a Release Pipeline for ADF in Azure DevOps. Thank you, Helge for putting this walkthrough together!

Continuous integration and delivery in Azure Data Factory using DevOps and GitHub – PASS Video
And in case you prefer video format, here is a video guide on the same topic from Rayis Imayev. In this virtual group session, Rayis talks about his experience going from manual deployments to Azure Data Factory to automated deployments with Azure Pipelines. Thank you Rayis!

If you’ve written an article about Azure DevOps or find some great content about DevOps on Azure, please share it with the #AzureDevOps hashtag on Twitter!

The post Top Stories from the Microsoft DevOps Community – 2019.10.25 appeared first on Azure DevOps Blog.

Integrate your product roadmap to Azure Boards

This post was contributed by Andre Theus from ProductPlan

Product roadmaps and backlogs are two areas where product managers spend a lot of their time. They work in lock-step to bring exciting solutions to the market as quickly as possible.

Together they let you create a compelling visual roadmap that’s powered by the detailed data stored in your backlog.

Let’s look at the unique characteristics of these two distinct tools and how they combine forces to create even more value.

Backlogs vs. Strategic roadmaps

Backlogs and roadmaps are complementary elements of the product planning and development process, but they’re not interchangeable.

A backlog is a repository for development tasks, including bugs, features, enhancements, and behind-the-scenes improvements. It’s a searchable, sortable list of tactical items that the product development team may be tasked with in the future. And when handled in a sophisticated management tool such as Azure Boards, backlog items can also be assigned levels of effort, grouped into larger epics, assigned to individuals, and tracked to completion.

On the other hand, a strategic roadmap is, well, strategic. Although individual features may appear on the roadmap, it is generally focused on higher-level themes and goals. Roadmaps are directional, informative and used for communication and alignment, rather than offering specific instructions and deadlines.

Basing the roadmap on themes instead of features provides valuable flexibility, since it’s not certain how far product development teams will get during execution. Even though individual backlog items may shift in and out based on progress made and ongoing learning, the themes persist. There’s also not always a 1:1 relationship between roadmaps and backlogs; in some cases, there may even be multiple backlogs linked to a single roadmap.

It might be tempting to have the backlog guide product decisions. But the tactical nature of backlog items doesn’t provide adequate context to make those decisions. Instead, the organization’s strategy and goals should be the driver, with backlog items assigned based on their alignment with those overarching objectives.

A roadmap’s purpose

Roadmaps are intended to provide a strategic, big-picture view of where the product is headed. When well-constructed, they’re a powerful tool that is used for many purposes both within the organization and with external stakeholders.

Grouping stories and features by themes builds a logical narrative to ensure there’s real value with each release and not just incremental improvements spread randomly across the product. With a single view, everyone can see what’s currently in the pipeline and when they can expect it.

The roadmap is also not an exhaustive laundry list of features and enhancements. It’s the output of prioritization exercises to align actions with strategic company goals. Therefore, most items on the strategic roadmap should have corresponding success metrics that tie back to the KPIs defined by the strategy.

Product roadmaps are the ultimate communication tool

Product roadmaps are really all about communication—they’re the perfect canvas for encapsulating the priorities and direction of the product. Roadmaps can be used by a variety of audiences for different purposes. Most often this is to build consensus and get buy-in from stakeholders since a clear and visual medium is the most effective way to put the product’s direction into perspective.

Whether to include dates and how to use them in the roadmap is often debatable, but with a sophisticated roadmapping tool, such as ProductPlan, their inclusion can be turned on and off. It depends on which audience you want the roadmap to be shared with. Internal executives and product development teams can see exactly when things are expected, while sales teams and external viewers get a more limited view to avoid setting false expectations or creating disappointed customers if things slip.

Filtering for different audiences can also extend to the specific roadmap items and categories, such as limiting internal infrastructure and technical debt-related items to only the relevant audiences.

When roadmaps are created with dedicated tools versus a simple slide deck, they can also easily incorporate a rough level of effort estimates and real-time progress toward completion. These estimates allow a proper resource allocation and a more accurate view of when things will ship.

Aside from everything I mentioned above, it’s important to remember the value that roadmaps can bring to the product development team as well.

Individual backlog items don’t really tell you why something is being prioritized for the upcoming sprint. Compare that to the opportunity a roadmap review provides.

Starting every sprint with a roadmap review (rather than a list of backlog items) gives the team context, so everyone knows where you’re heading and how the sprint items will get you there.

How to use your backlog

The product backlog is the single source of truth for stories, bugs, and features—it is the grand repository for everything that could be done to improve the product and where product feedback turns into action. But because it’s so comprehensive, it’s far too unwieldy to use for strategic planning.

Instead, the backlog is where the execution of that strategy gets serious and the details and specificity left-out at the roadmapping level are embraced and enhanced. It’s where granular estimates reside, implementation details are welcomed, and there are no bad ideas—just ones that never get prioritized.

Although backlogs may be vast, their focus and value are really on the short-term objectives of the product development team.

Backlogs and roadmaps inform each other

When a backlog is first created, its initial items come from the roadmap—these are the things you know you want to complete based on the strategy. The backlog is then where everything is broken down into tasks and assigned to individuals.

After the product is released, feedback from customers will generate additional backlog items on a continual basis. Backlog grooming is essential here; remember, not all feedback is created equal, and grooming facilitates the efficient movement of items to the next stage.

Finally, the backlog can eventually be used as an input to the strategic roadmapping process.

Make the most of the ProductPlan and Azure Boards integration

ProductPlan’s integration with Azure Boards significantly simplifies how to create a product roadmap and then keeps it up-to-date in real-time. To get started, ProductPlan and Azure Boards need a one-time synchronization, and then your epics and stories can be quickly imported into ProductPlan and connected to the product roadmap.

Once this connection is made, the completion status within Azure Boards will also be visible within the ProductPlan roadmap. This makes it simple to keep both platforms aligned over time as changes are made.

With ProductPlan’s private sharing feature, a link to the roadmap can be sent to stakeholders or even embedded in Azure Boards. You can share it broadly within the organization without requiring each person to get a ProductPlan license. Plus, everyone will always see the latest version, instead of sending out a slide deck which can get stale and out-of-date very quickly while remaining in circulation indefinitely. This self-service access also means you’ll never be asked for the “latest roadmap” again. Anyone with access can also subscribe to receive roadmap notifications so they’re alerted anytime something changes. This is particularly helpful to keep stakeholders engaged and also saves product managers from having to constantly communicate updates.

Individuals can also click on roadmap items to drill down and see additional details, such as progress made. You can also link back to Azure Boards and see the specific items tied to that roadmap item.

Ready to unlock the power of a beautiful, visual roadmap linked to the robust repository of your product backlog? Learn more about ProductPlan and Azure Boards integration today!

The post Integrate your product roadmap to Azure Boards appeared first on Azure DevOps Blog.

Be a Technology Tourist

Passport Pages by daimoneklund, used under CC

In 1997, 15% of Americans had passports. However, even now fewer than half do. Consider where the US is physically located. It's isolated in a hemisphere with just Canada and Mexico as neighbors. In parts of Europe a 30 minute drive will find three or four languages, while I can't get to Chipotle in 30 minutes where I live.

A friend who got a passport and went overseas at age 40 came back and told me "it was mind-blowing. There's billions of people who will never live here...and don't want to...and that's OK. It was so useful for me to see other people's worlds and learn that."

I could tease my friend for their awakening. I could say a lot of things. But for a moment consider the context of someone geographically isolated learning - being reminded - that someone can and will live their whole life and never need or want to see your world.

Travel of any kind opens eyes.

Now apply this to technology. I'm a Microsoft technologist today but I've done Java and Mainframes at Nike, Pascal and Linux at Intel, and C and C++ in embedded systems as a consultant. It's fortunate that my technology upbringing has been wide-reaching and steeped in diverse and hybrid systems, but that doesn't negate someone else's bubble. But if I'm going to speak on tech then I need to have a wide perspective. I need to visit other (tech) cultures and see how they live.

You may work for Microsoft, Google, or Lil' Debbie Snack Cakes, but consider getting a passport - hey, in fact one isn't needed! - and visit other (tech) cultures. Go to their meet-ups, visit their virtual conferences, follow people outside your space, try to build their open source software, learn a foreign (programming) language. They may not want or need to visit yours, but you'll be a better and more well-rounded person when you return home if you choose to be a technology tourist.


Sponsor: Like C#? We do too! That’s why we've developed a fast, smart, cross-platform .NET IDE which gives you even more coding power. Clever code analysis, rich code completion, instant search and navigation, an advanced debugger... With JetBrains Rider, everything you need is at your fingertips. Code C# at the speed of thought on Linux, Mac, or Windows. Try JetBrains Rider today!



© 2019 Scott Hanselman. All rights reserved.
     

What’s new in Azure DevOps Sprint 159

Sprint 159 has just finished rolling out to all organizations and you can check out all the new features in the release notes. Here are some of the features that you can start using today.

Azure Boards app for Microsoft Teams

We’re excited to announce the new Azure Boards app for Microsoft Teams. The app allows you to monitor work item activity by subscribing to different types of events, including work item created, work item updated, etc., and to get notifications for these events in your Teams channel. You can also use the app to create new work items.

Azure Repos app for Microsoft Teams

We are announcing a new Azure Repos app for Microsoft Teams. With this app, you can monitor your repositories and get notified in your Teams channel whenever code is pushed or checked in, or pull requests (PRs) are created or updated. In addition, previews for pull request URLs will help you initiate discussions around PRs and have contextual and meaningful conversations.

Mark files as reviewed in a pull request

Sometimes, you need to review pull requests that contain changes to a large number of files and it can be difficult to keep track of which files you have already reviewed. Now you can mark files as reviewed in a pull request. You can mark a file as reviewed by using the drop-down menu next to the file name, or by hovering and clicking on the file name. (Note that this feature is only meant to track your progress as you review a pull request, so the marks will only be visible to the reviewer.)

These are just the tip of the iceberg, and there are plenty more features that we’ve released in Sprint 159. Check out the full list of features for this sprint in the release notes.

The post What’s new in Azure DevOps Sprint 159 appeared first on Azure DevOps Blog.

Announcing the Azure Boards app for Microsoft Teams

Developers today rely on communication platforms like Microsoft Teams extensively to get work done. Often, Microsoft Teams is the place where ideas are discussed, insights are generated and product defects are identified. The same discussions then can continue in Azure Boards where development teams actually plan and manage their work. We are excited to announce the Azure Boards app for Microsoft Teams that brings work in Azure Boards closer to the team channel in Microsoft Teams.

With this app, users can create new work items in Azure Boards through a command or through message actions that convert a conversation in the channel into a work item. Users can also set up and manage subscriptions to get notifications in their channel whenever work items are created or updated. The messaging extension can be used to search for and share work items with other members in the channel, and previews can be generated from work item URLs to help initiate discussions and keep the conversations contextual.

Create work items

Use message actions to create work items from conversations in the channel

Get notified when a work item is created or updated

Search and share work items using messaging extension

Use work item URLs to initiate discussions around work items

For more details about the app, please take a look at the documentation or go straight ahead and install the app.

Please give the app a try and send us your feedback using the @azure boards feedback command in the app or on Developer Community.

The post Announcing the Azure Boards app for Microsoft Teams appeared first on Azure DevOps Blog.

Advancing industrial IoT capabilities in Azure Time Series Insights


Late last year, we announced the preview of some of the foundational capabilities of our industrial IoT analytics platform with a scalable time series storage for trending decades of data, semantic model support to describe domain-specific metadata, and enhanced analytics APIs and UX. We are building on the power of this analytics platform with additional new capabilities that will add richness and flexibility, and open up new scenarios for our enterprise IoT customers. Today, we are announcing the following new capabilities:

  • Warm and cold analytics support that builds on top of our existing preview and provides retention-based data routing between warm and cold stores. Customers can now perform interactive analytics over warm data as well as gain operational intelligence over decades of historical data stored in a customer-owned Azure Data Lake.
  • A flexible analytics platform that enables attaching a customer-owned Azure Data Lake to Azure Time Series Insights for data archival, thereby allowing customers to have ownership of their IoT data. Customers can connect to and interop across a variety of advanced analytics scenarios such as predictive maintenance and machine learning using familiar technologies including Apache Spark™, Databricks, Jupyter, etc.
  • Rich query APIs and user experience to support interpolation, new scalar and aggregate functions, categorical variables, scatter plots, and time shifting of time series signals for in-depth analysis.
  • Significant scale and performance improvements at all layers of the solution including ingestion, storage, query, and metadata/model to support customers’ IoT solution needs.
  • Azure Time Series Insights Power BI connector that enables customers to take the queries they do in Azure Time Series Insights directly into Power BI to get a unified view of their BI and time series analytics in a single pane of glass.

Azure Time Series Insights continues to provide a scalable pay-as-you-go pricing model enabling customers to tune their usage to suit their business demands and let Azure Time Series Insights analytics platform worry about scaling the infrastructure to meet their growing needs.

A comprehensive analytics platform for Industrial IoT

We released a preview of our first wave of capabilities last year in December. We have since had great customer adoption and feedback that has led us to the preview refresh today.

Our customers span all major industrial IoT segments including manufacturing, automotive, oil and gas, power and utility, smart buildings, and IoT consulting. These customers are telling us that IoT time series analytics is more than just the potential to achieve operational excellence. IoT time series data together with rich contextualization helps them drive dynamic transformation, enabling their businesses to become more agile and data-driven than ever before.

To help maximize the value of time series data and drive this digital revolution, we're updating the Azure Time Series Insights offering to support comprehensive and rich analytics over multi-layered storage, open file format and flexibility to connect to other data services for connected data scenarios, enterprise grade scale and performance, enhanced user experience and SDK support, and out-of-box connectors to data services such as Power BI to enable end-to-end analytics scenarios.

Details of the new features in preview refresh

Comprehensive and rich analytics over multi-layered storage

The majority of industrial IoT customers work with IoT data across a variety of data access scenarios. To satisfy these requirements, Azure Time Series Insights provides scalable, multi-layered time series storage for warm and cold data analytics. When a customer provisions Azure Time Series Insights with the pay-as-you-go pricing option, they can configure Azure Storage as the cold store and also enable a warm store, choosing a retention period for the warm store that can be changed at any time. Azure Time Series Insights automatically routes ingested data to the warm store based on the configured retention period; for example, with retention configured as 30 days, the most recent 30 days of streamed data is kept in the warm store. All data is routed by default to the customer-owned Azure Data Lake for archiving and analytics.

Queries within the configured retention period are always served from the warm store with no additional input from the user, while queries outside the retention period are served from the cold store. This lets customers run high-volume, interactive, asset-based analytics over warm data for monitoring, dashboarding, and troubleshooting scenarios, and continue to run asset-based analytics over decades of cold data stored in their Azure Data Lake for operational intelligence such as troubleshooting, golden batch analysis, and predictive analytics.
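To make the routing behavior concrete, here is a minimal Python sketch of a Time Series Query request against a Gen2 environment. The environment FQDN format, api-version, and body fields below are assumptions based on the preview documentation at the time and may differ from the current API reference; values in angle brackets are placeholders. A search span inside the configured retention period would be served from the warm store, while an older span would be served from the cold store.

    import requests

    # Assumed FQDN format and bearer token; replace with your environment details.
    ENVIRONMENT_FQDN = "<environment-id>.env.timeseries.azure.com"
    TOKEN = "<azure-ad-bearer-token>"

    query = {
        "getEvents": {
            "timeSeriesId": ["sensor-001"],      # hypothetical time series instance
            "searchSpan": {                      # span inside the warm retention window
                "from": "2019-10-01T00:00:00Z",
                "to": "2019-10-02T00:00:00Z",
            },
        }
    }

    response = requests.post(
        f"https://{ENVIRONMENT_FQDN}/timeseries/query",
        params={"api-version": "2018-11-01-preview"},   # assumed preview api-version
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=query,
    )
    response.raise_for_status()
    print(response.json())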

Simple and easy to use configuration for warm and cold stores in the Azure Time Series Insights provisioning experience.

Flexible analytics platform for integrating with first and third party data services

A critical and powerful capability that is unleashed with our cold store is data connectivity to other data solutions for end-to-end scenario coverage. As mentioned earlier, the cold store is a customer-owned Azure Data Lake and is the source of truth for all their IoT data and metadata. Data is stored in an open source, Apache Parquet format for efficient data compression, space, query efficiency, and portability.

Azure Time Series Insights will provide out-of-box connectors for popular and familiar data services that our customers use, for example Apache Spark™ or Databricks for machine learning, and predictive analytics. This is a work in progress and will become available to customers shortly.

As part of this preview refresh, we are releasing the Azure Time Series Insights Power BI connector. This feature is available in the Azure Time Series Insights Explorer user experience through the ‘Export’ option, allowing customers to export the time series queries they create in our user experience directly into the Power BI desktop and view their time series charts alongside other BI analytics. This opens the door to a new class of scenarios for industrial IoT enterprises who have invested in Power BI. It provides a single pane of glass over analytics from various data sources including IoT time series, thereby unlocking significant business and operational intelligence.

Enhanced asset-based analytics API and user experience

Since our preview launch in December last year, we have worked with a number of key IoT enterprise customers to prioritize the set of requirements around query and user experience. The result is the following new capabilities we are announcing as part of our preview refresh today:

  • Interpolation to reconstruct time series signals from existing data
  • Discrete signal processing with categorical variables
  • Trigonometric functions
  • Scatter plots
  • Time shifting time series signals to understand data patterns
  • Model API improvements for hierarchy traversal, time series search, auto-complete, paths, and facets
  • Improved search and navigation efficiency and continuation token to support query at scale
  • Improved charting capabilities including support for step interpolation, minimum or maximum shadows, etc.
  • Updated model authoring and editing experience
  • Increased query concurrency to support up to 30 concurrent queries

We have a number of new capabilities coming in this space including support for time weighted averages, additional scalar and aggregate functions, dashboards, etc. over the coming months.

Enhanced analytics experience over warm and cold data with query support for continuous as well as discrete time series.

Azure Time Series Insights is committed to our customers’ success

We look forward to continuing to deliver on our commitment of simplifying IoT for our customers and empowering them to achieve more with their IoT data and solutions. For more information, please visit the Azure Time Series Insights product page and documentation. Also, try out the quickstart to begin using Azure Time Series Insights today.

Please provide feedback and suggestions on how we can improve the product and documentation by scrolling down to the bottom of each documentation page, where you can find a button for “product feedback” or sign in to your GitHub account and provide feedback. We value your input and would love to hear from you.

Building retail solutions with Azure IoT Central


Azure IoT Central is an IoT app platform for solution builders that simplifies the challenges of building and deploying scalable and affordable enterprise grade IoT applications. Across the retail industry, the use of connected devices to deliver business performance continues to grow in popularity. New solutions are accelerating business model transformation by connecting manufacturers, supply chains, warehouses, and store shelves to owners, operators, and customers in exciting new ways. Today we’ll discuss our stance on IoT in the retail industry, as well as tell you about just a few of our partners building incredible solutions on Azure IoT Central.

Based on the recent IoT Signals report, our survey of over 3,000 enterprise decision makers across the world, the Retail industry has the highest adoption rate of IoT related solutions at 90 percent, which is higher than Manufacturing, Transportation, Healthcare, or Government. Right now, retail and wholesale companies see top use cases for IoT within their supply chains (64 percent) and inventory optimization (59 percent) and of course leaders across all industries have concerns about security. Yet, we know that retailers and companies along the value chain have a long way to go before reaping all the benefits that IoT will provide.

Updates to IoT Central

Today, we announced updates to Azure IoT Central to help solution builders move beyond proof of concept to building business-critical applications they can brand and sell directly or through Microsoft AppSource. IoT Central can help retail solution builders accelerate development, enabling them to get connected, stay connected, and transform their business by managing IoT solutions that deliver IoT data and insights to the business applications where decisions are made. For more information, please see our IoT Central blog.

We are supporting retail-specific solution builders with five IoT Central retail app templates that builders can brand, customize, and make their own, using extensibility via APIs, data connectors to business applications, repeatability and manageability of their investment through multitenancy, and seamless device connectivity. Get started today with any app template for free, along with starter materials like a sample operator dashboard and simulated devices that show you what's possible. In early 2020, updated pricing will help with predictability as you sell your solutions directly to customers or through Microsoft AppSource.

When you are ready to get to customizing and extending, take a look at our rich documentation set, which augments the journey with overviews, tutorials, how-to’s, and industry relevant concepts.

Infographic of the five app templates.

Figure 1, Your brand, your SaaS: customize and extend one of these five retail app templates to make them your own

Innovative retail partners building their SaaS with IoT Central

Established industry leaders across the retail ecosystem are optimizing omnichannel solutions with IoT Central; delivering IoT insights and actions from the beginning of the supply chain through distribution, warehousing, and into the hands of consumers through storefront or delivery. Learn about what QuEST Global, C.H. Robinson, Dynamics 365 Connected Store, and Footmarks Inc. are doing today.

Digital distribution center solution from Lenovo

In July, Lenovo introduced Lenovo Digital Distribution Center (built with IoT Central) and discussed many of the challenges faced by distribution centers globally, including staffing surges during peak times, labor costs, space constraints, and overall productivity.

An infographic showing the Digital Distribution workflow.

Figure 2, illustration of the digital distribution workflow

A diagram of the Digital Distribution Center by Lenovo

Figure 3, Architecture diagram of Digital Distribution Center by Lenovo

Today we’ll introduce three more solution builders developing solutions across connected logistics, store analytics, and smart inventory management.

Connected logistics solutions from C.H. Robinson and QuEST Global

The challenges facing global logistics and fleet management continue to grow as more retailers move to just-in-time shipping and warehousing. With the holiday shopping (and shipping) season fast approaching, global shipping and freight transportation provider C.H. Robinson is putting IoT Central to work during its busiest time of the year. Intel intelligent gateways and IoT tags managed by IoT Central bring new data and insights into the industry-leading Navisphere Vision. Jordan Kass, President of TMC, a division of C.H. Robinson responsible for Navisphere, told us: “Navisphere Vision provides global shippers supply chain visibility and a predictive analytics platform. To speed up our deployment, increase our capabilities, and evolve for the future, we are partnering and building new device connections with Azure IoT Central to empower one robust agnostic connection that allows for infinite scalability and speed. This enables us to further optimize and deliver better outcomes—such as improved savings, reliability, and visibility—during these high-stakes holiday shipping months.”

A screenshot of an example Navisphere Vision dashboard for IoT device insights

Figure 4, an example of a Navisphere Vision dashboard for IoT device insights, tracking temperature and humidity levels in shipping containers.

A diagram for the Connected Logistics solution by C.H. Robinson

Figure 5, Architecture diagram for Navisphere Vision by C.H. Robinson

Road safety is a global issue affecting billions of people around the world. QuEST Global's fleet management solution, Fleet Tracker, aims to reduce roadside issues, using CalAmp OBD2 dongles to deliver real-time location, driving patterns, speed, engine health, and geofencing while managing vehicles nearly anywhere in the world. Maxence Cacheux, Head of Strategic Partnerships at QuEST Global, told us, “We are delighted with the successful deployment of our fleet management solution built on IoT Central, which enhanced its speed, security, and scalability. Now we are planning for the future when our customers around the world have tens of thousands of connected devices, delivering business transforming insights and actions.”

A screenshot of an example dashboard from QuEST Global's Fleet Tracker solution

Figure 6, an example of a dashboard from QuEST Global's Fleet Tracker solution, delivering insights from connected vehicles

A diagram for Fleet Tracker by QuEST Global

Figure 7, Architecture diagram for Fleet Tracker by QuEST Global

 

Store analytics from Dynamics 365

Dynamics 365 Connected Store empowers retailers with real-time observational data to improve in-store performance. From customer movement to the status of products and store devices, Dynamics 365 Connected Store will provide a better understanding of your retail space. Built with IoT Central, Dynamics 365 Connected Store empowers store managers and employees to provide optimal shopping experiences through triggered alerts based on real-time data from video cameras and IoT sensors. This new workflow can significantly improve in-store performance by protecting inventory, increasing profitability, and optimizing the shopping experience in real time.

A screenshot of an example Dynamics 365 Connected Store dashboard

Figure 8, An example of Dynamics 365 Connected Store dashboard enabling retail staff to visualize the flow of traffic throughout their grocery store using optical IoT sensors

A diagram of Dynamics 365 Connected Store

Figure 9, Architecture diagram of Dynamics 365 Connected Store

Smart Inventory Management from Footmarks Inc.

Consumer packaged goods (CPG) manufacturers share many of the same challenges; one of them is getting hundreds, or sometimes thousands, of custom assets like displays to the correct retail locations and keeping them in the store for the right amount of time.

When displays don't reach their pre-determined locations, retailers experience a significant loss in sales, a key impact for brands during important buying times. Around the country today, we know a significant portion of point-of-purchase (POP) display programs are not compliant, a problem that Footmarks Inc. is looking to solve through its Smart Tracking app built with IoT Central, an asset tracking application that delivers previously unavailable insights to CPGs.

CPGs can now track the location of their POP assets throughout the entire supply chain and into store execution. Gone are the days of mystery shopping and expensive store visits to get details on your assets. Shawn Englund, Footmarks Inc.'s CEO, is enthusiastic about the future, saying, “We are excited to be working with some of the world's largest CPGs to solve the age-old issue of merchandizing compliance. By adding Azure IoT Central we are able to gain even more insights throughout our CPG partner supply chains and provide actionable insights throughout each of their campaigns.”

A screen shot of an example Footmarks dashboard

Figure 10, An example of a Footmarks dashboard showing POP asset tracking along the supply chain.

A diagram of Smart Tracking by Footmarks Inc

Figure 11, Architecture diagram of Smart Tracking by Footmarks Inc.

Getting started

We are at the beginning of an incredible revolution that connects strategy, tools, and devices, and empowers retailers to turn insights into actions. Retail companies around the world are using IoT today to reinvent how they connect to customers, empower employees with the right information, deliver an intelligent supply chain, and inform new business models as individuals and organizations continue to connect billions of new devices to business applications. Here is how you can get started.

1. Start building today on IoT Central

2. Browse the growing list of retail applications in AppSource, devices in the Azure IoT Certified Device Catalog, or contact any of the solution builders discussed today.

3. Connect with us at Microsoft Ignite, November 4-8

4. Visit us at the world’s largest Retail trade show, NRF in New York City where you can speak with experts and get hands on, January 12-14

Azure IoT Central: Democratizing IoT for all solution builders


For the last five years, our industry has buzzed with the promises of IoT. IoT has evolved from being a next-horizon term, to a common vernacular employed across industry conversations. In fact, earlier this year we surveyed 3,000 enterprise decision makers across the world and learned that 85% have developed at least one IoT project. Across four major industries (manufacturing, retail, healthcare, and government), more than 84% of enterprise decision makers consider IoT “critical” to success (read the full report here).

Despite this near consensus, the average maturity of production-level IoT projects remains extremely low. Over 90% of companies experience some failure in the proof of concept stage due to concerns and knowledge gaps around how to scale their solutions securely, reliably, and affordably. This finding is not surprising. Scaling a project not only increases the cost, it also introduces significant technical complexity—from knowing how to adapt an architecture as the number of connected devices grows to millions, to ensuring your security remains robust as your breachable footprint expands.

Having worked with thousands of IoT customers, our engineers have encountered these issues time and time again. We used these learnings to evolve Azure IoT Central and help solution builders avoid common pitfalls that prevent many projects from moving beyond the proof of concept stage. We explained these findings in our new report, “8 attributes of successful IoT solutions,” to help IoT solution builders ask the right questions upfront as they design their systems, and help them select the right technology platforms.

A fully managed IoT app platform

Azure IoT Central is our IoT app platform for solution builders (such as ISVs, SIs, and OEMs) to design, deploy, and manage enterprise-grade solutions that they can brand and sell to their customers, either directly or through Microsoft AppSource. Azure IoT Central helps you connect your devices, manage them, generate insights, and bring those insights into your business applications.

Azure IoT Central provides a complete and robust platform that handles the “plumbing” of IoT solutions. It is by no means an end-to-end solution. The value of IoT Central is brought to life when solution builders leverage it to connect and manage their devices, as well as to extend device insights into their line of business applications. This allows solution builders to spend their time and energy in their area of expertise, transforming their businesses through value-adding and brand-differentiating elements. With whitelabeling, solution builders can go to market with a resulting solution that reflects their brand. While many customers choose to design and build cloud solutions using individual Azure services (a Platform as a Service, or PaaS, approach), Azure IoT Central reduces the cost of building and maintaining a PaaS-based solution by providing a fully managed platform.

Today we’re announcing several major updates to Azure IoT Central. We’re confident these updates will inspire builders to develop industry-leading solutions with the peace of mind that their applications rest on a secure, reliable, and scalable infrastructure–enabling them to connect and manage devices, generate insights, and bring those new insights into their existing applications.

 

New app templates for industry-focused enablement

Today we are releasing 11 new industry app templates, designed to illustrate the types of solutions our partners and customers can build across retail, healthcare, government, and energy.  

Azure IoT Central now has eleven new industry-specific application templates for solution builders to get started building applications across retail, energy, healthcare, and government.

Innovative partners using Azure IoT across industries

From startups to established leaders, we are seeing solution builders across industries leverage Azure IoT Central to transform their industries.

One area where we're seeing solution builders use Azure IoT Central to design innovative solutions is healthcare. Every 20 seconds a limb is lost to diabetes. To tackle this issue, Seattle-based startup Sensoria Health joined forces with leading orthopedic medical footwear manufacturer Optima Molliter to launch an IoT solution that enables continuous, remote monitoring of patients recovering from diabetic foot ulcers. Patients and physicians can leverage the Sensoria Core solution, a hub utilizing IoT and artificial intelligence (AI) based on telemetry from Optima footwear, to monitor real-time patient adherence to clinician recommendations via a mobile app.

Physicians can leverage the clinician dashboard, which provides a holistic view of their patient population, to manage patient interactions, understand patient adherence to recommendations over time, and to decide which patients are in most need of care at a given moment. By enabling real-time alerts, physicians can manage care escalation decisions to expedite the healing of foot wounds and reduce the risk of amputations. Azure IoT Central provided the IoT application infrastructure that allowed Sensoria to quickly build a globally available, secure, and scalable IoT solution. Furthermore, Azure IoT Central leverages Azure API for FHIR, enabling Sensoria Health to ensure healthcare interoperability and compliance standards are met when managing the health data provided by EMR systems and from Sensoria Core embedded microelectronic devices. Read the press release to learn more.

We're also seeing well-established solution builders like C.H. Robinson, an American Fortune 500 provider of multimodal transportation services and third-party logistics, taking advantage of IoT Central. Using Intel intelligent gateways and IoT tags managed by Azure IoT Central, C.H. Robinson has quickly integrated IoT data and insights into its industry-leading Navisphere Vision product. The Navisphere solution is being used by leading retailers, including Microsoft's own supply chain teams, to optimize logistics and costs as we prepare to deliver Surface and Xbox products ahead of the holiday season. Jordan Kass, President of TMC, a division of C.H. Robinson responsible for Navisphere, described the challenges facing the industry: “Today, retailers of all sizes need to know where their products are and where they are going … . Building with IoT Central offered us speed, scale, and simplicity to connect devices like Intel's gateways and IoT tags.”

Vattenfall, a Swedish energy company investing deeply in renewable energy, and Microsoft are collaborating on solutions using Azure IoT Central to address challenges in energy markets to match supply and demand for renewable energy. “The IoT Central app platform has expedited our product development, providing fast and seamless device connectivity and device management, and built-in data storage, rule creation, and dashboarding for our operators,” says Sebastian Waldenström, Head of Vattenfall's IoT and Energy Management.

While many of our partners have established industry expertise within verticals, we've also seen IoT Central be used by solution builders with horizontal reach, such as Mesh Systems. Mesh Systems is a global expert in asset-tracking solutions, with customer applications spanning retail, logistics, banking, pest control, construction, and much more. “IoT Central helps us do what we do best–only now, what used to take 3 months to build takes 3 days,” said Doyle Baxter, strategic alliance manager at Mesh Systems.

These partners and others are on a journey building with IoT Central. Read more about building retail solutions with IoT Central here, and follow along in the coming months as we feature more partner success across other industry segments.

New capabilities for production-level solutions

Expanding IoT Central portfolio with IoT Edge: Businesses can now run cloud intelligence directly on IoT devices at the edge, managed by Azure IoT Central. This new feature helps businesses connect and manage Azure IoT Edge devices, deploy edge software modules, publish insights, and take actions at scale, all from within Azure IoT Central.

Seamless device connectivity with IoT Plug and Play: Solution builders building with Azure IoT Central can select from a range of Azure IoT Pre-Certified Plug and Play devices and quickly connect them to the cloud. Customers can now build production grade IoT solutions within days without having to write a single line of device code, drastically cutting down the time to market and costs.

Range of actions within the platform: Azure IoT Central exposes various levels of extensibility from within the platform. A user can define rules on device data streams that trigger no-code (Microsoft Flow) or low-code actions (Azure Logic Apps). A solution builder could also configure more complex actions, exchanging data with an external service via a Webhook or Azure Functions based action.
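As an illustration of the webhook path, the sketch below shows an HTTP-triggered Azure Function (Python) that could receive a rule-triggered payload from IoT Central and hand it to a downstream system. The payload field names are assumptions for illustration only, not the documented webhook schema.

    import json
    import logging
    import azure.functions as func

    def main(req: func.HttpRequest) -> func.HttpResponse:
        # Parse the JSON body posted by the IoT Central rule action.
        payload = req.get_json()

        # Field names below are assumed for illustration; inspect a real
        # payload to see the actual schema.
        device_id = payload.get("deviceId", "unknown")
        rule_name = payload.get("ruleName", "unknown")
        logging.info("Rule %s fired for device %s", rule_name, device_id)

        # Hand off to a line-of-business system here (create a ticket,
        # write to a queue, call another API, and so on).
        return func.HttpResponse(
            json.dumps({"status": "received"}),
            mimetype="application/json",
            status_code=200,
        )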

Extensibility through data export: Continuous data export from Azure IoT Central can be used to integrate data streams directly into Azure PaaS services, such as Azure Blob Storage for data retention, Azure Event Hubs and Azure Service Bus for building rich processing pipelines that carry IoT data and insights into business applications, or storage for Azure Machine Learning.
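For example, if exported telemetry is routed to Azure Event Hubs, a downstream pipeline could consume it with the Event Hubs SDK. The sketch below assumes the azure-eventhub v5 Python package and uses placeholder connection details.

    from azure.eventhub import EventHubConsumerClient

    # Placeholders: connection details for the Event Hub that IoT Central
    # continuous data export writes to.
    CONNECTION_STR = "<event-hubs-connection-string>"
    EVENTHUB_NAME = "<event-hub-name>"

    def on_event(partition_context, event):
        # Each event carries one exported IoT Central message.
        print(partition_context.partition_id, event.body_as_json())
        partition_context.update_checkpoint(event)

    client = EventHubConsumerClient.from_connection_string(
        CONNECTION_STR,
        consumer_group="$Default",
        eventhub_name=EVENTHUB_NAME,
    )

    with client:
        # Read from the start of each partition; use a checkpoint store for
        # production-style processing.
        client.receive(on_event=on_event, starting_position="-1")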

Public APIs to access features: Solution builders with extensibility needs beyond device data now have access to IoT Central features through our public APIs. Users can develop robust IoT solutions that leverage IoT Central programmatically as the core for device modelling, provisioning, lifecycle management, operations (updating and commanding), and data querying.

Application repeatability: Today, solution builders can use application templates to export their investments and duplicate them for new customers, saving hours of time on configuration and customization.

Manageability and scale through multitenancy: We know that many solution builders need more than just repeatability; they also need manageability to truly scale their investments to customers. That is why, in the coming months, Azure IoT Central will support multitenancy: solution builders can build once and use a multitenant interface to onboard, configure, and update many customers and organizations globally across regions, offering both device and data sovereignty without sacrificing manageability.

User access control through custom user roles: Organizational complexity varies across customer solution implementations. Custom user roles allow for clearly defined access control to the data as well as actions and configurations within the system. It gives users control over exactly what they need and nothing more.

Device and data scale: Azure IoT Central scales users' data processing pipelines and provides storage to support millions of devices. Solution builders can achieve device scale by seamlessly connecting devices with IoT Plug and Play integration and authoring IoT Central experiences for Plug and Play devices.

Pricing update: In early 2020, we're unveiling a new pricing tier that will make scaling solutions more affordable and will provide more flexibility for solution builders. IoT Central customers will soon be able to select between multiple pricing plans based on their specific message volume needs for their projects. Check back on our pricing page in the coming weeks for more details.

Azure IoT Central: your IoT application platform

Microsoft is investing $5 billion in Azure IoT over the next four years. Our goal is to simplify the journey in IoT, allowing solution builders to bring solutions to market faster, while staying focused on digital transformation.

Azure IoT Central offers a powerful example of how Microsoft continues to deliver on this commitment. By removing the complexity and overhead of setup, management burden, and operational costs, we can accelerate the creation of innovative solutions across all industries. Azure IoT Central provides organizations with the IoT application platform they need to create the next wave of innovation in IoT. And that means a more intelligent and connected world that empowers people and organizations to achieve more. To learn more, visit our product page.

Customer success stories with Azure Backup: Metori Capital Management


As a part of our continued customer success story series, Metori Capital, an asset management company based in Paris, shared their Azure Backup story on how they have secured their data and assets using Azure Backup for years.

Customer background

Metori Capital is a pure play asset management company specialized in systematic quantitative strategies with approximately $500M of assets under their management. Their day-to-day business is to collect market data and run proprietary quantitative models to identify an optimal allocation of wealth across a wide variety of financial instruments. They then send hundreds of orders to the main futures markets across the world. In parallel, they monitor risks, reconciliations of thousands of trades, and conduct valuation of the funds they manage.

Metori Capital relies on sophisticated technology and proprietary algorithms to help it buy, sell, and manage assets in Epsilon. It needed to build an enterprise platform that could support these algorithms, meet compliance, and safeguard data. To keep pace with its goals, Metori also wanted the benefits of scalability and reliable business continuity.

Azure Backup

Metori Capital, which hosts a large part of its data in virtual machines, relies completely on Azure Backup for data protection and resiliency in the event of a disaster. Its business demands guaranteed operational efficiency and scalability, and the company has all of its processes fully automated and streamlined.

"For all those reasons, from our inception in 2016, we decided to have our production environment 100 percent Azure hosted. Also in our business, continuity of operations is an absolute prerequisite that imposes us to demonstrate that we might be only marginally affected by any hardware, network or server outage," — Loïc Guenin, Global head of Front Office Technologies at Metori Capital.

Azure Backup promises data availability even in a disaster, and it is what helped Metori Capital through adverse situations. Periodic business continuity and disaster recovery drills ensured security compliance at every step.

"This is where Azure Backup is an important service to us: We regularly conduct “fire drill” exercises against our IT infrastructure to assess our resiliency to different kinds of disrupting events. In the context of such exercises, thanks to Azure Backup, we proved that we could deploy a working clone of our production environment on a different infrastructure, in a matter of hours," — Loïc Guenin.

Summary

Metori Capital believes that Azure Backup provides a great return on investment by getting simple, secure, and cost-effective protection for business-critical workloads. Azure also provides Metori with enterprise capabilities it needs to stay connected to markets and investors, meet multiple compliance requirements, retain and safeguard data, and ensure business continuity - all while only paying for the services it consumes.

“Without Azure, it might have cost us 10 times more to build an environment that could cope with the future we expect,” — Nicolas Gaussel, CEO of Metori Capital Management.

Azure Backup provides a complete and dependable solution for Metori Capital's critical workloads. It also provides them with the agility to back up their ever-growing data to Azure.

Learn more about Azure Backup on Azure.com

Attending Ignite? Add the session, Microsoft Azure Backup: Deep dive into Azure’s built-in data protection, to your schedule builder and attend on Thursday, November 7th, from 11:45am – 12:30pm.

Rain or shine: Azure Maps Weather Services will bring insights to your enterprise


Weather: the bane of many motorists, transporters, agriculturalists, retailers, or just about anyone who has to deal with it—which is all of us. That said, we can embrace weather and use weather data to our advantage by integrating it into our daily lives.

Azure Maps is proud to share the preview of a new set of Weather Services for Azure customers to integrate into their applications. Azure Maps is also proud to announce a partnership with AccuWeather—the leading weather service provider, recognized and documented as the most accurate source of weather forecasts and warnings in the world. Azure Maps Weather Services adds a new layer of real-time, location-aware information to our portfolio of native Azure geospatial services powering Microsoft enterprise customer applications.

“AccuWeather's partnership with Microsoft gives all Azure Maps customers the ability to easily use and integrate authentic and highly accurate weather-based location intelligence and routing into their applications. This is a game-changer,” says Dr. Joel N. Myers, AccuWeather Founder and CEO. “We are delighted with this collaboration with Microsoft as it will open up new opportunities for organizations–large and small–to benefit from our superior weather data based on their unique needs.”

The power of Azure Maps Weather Services

A global view of an Azure Map with new weather services information layered atop of it.

Bringing Weather Services to Azure Maps means customers now have a simple means of integrating highly dynamic, real-time weather data and visualizations into their applications. There are a multitude of scenarios that require global weather information for enterprise applications. For motorists, we can pick up our phones, or ask a smart speaker about the weather. Our cars can determine the best path for us based on traffic, weather, and personal timing considerations.

Transportation companies can now feed weather information into dynamic routing algorithms to determine the best route conditions for their respective loads. Agriculturalists can have their smart sprinkler systems, running connected edge computing, informed of incoming rain, saving crops from overwatering and conserving water, a delicate resource. Retailers can use predicted weather information to determine the need for high-volume goods, optimizing their supply chains.

Did you know that most electric vehicle batteries lose a percentage of their charge when the temperature dips below freezing? With Azure Maps Weather Services, you can use current or forecasted temperatures to determine your vehicle's range. Range can determine how far a car can drive along a route, set better expectations for estimated arrival times, determine if charging stations are close by, or find hotels that are reachable based on this reduction in battery life. Freezing temperatures also increase the time a battery takes to charge, meaning more time spent at the charging station.

Having insight into temperature drops at charging stations means being able to calculate the length of time a driver will spend at a charging station, which, in turn, allows charging station owners to recalculate productivity metrics for their respective stations based on weather conditions.

An Azure Maps rendering of Montana with a weather radar overlay atop of it.

Azure Maps Weather Services in preview

Azure Maps Weather Services are available as a preview with the following capabilities:

  • Weather Tile API: Fetches radar and infrared raster weather tiles formatted to be integrated into the Azure Maps SDKs. By default, Azure Maps uses vector map tiles for its web SDK (see Zoom Levels and Tile Grid). Use of the Azure Maps SDK is not required and developers are free to integrate the Azure Maps Weather Services into their own Azure Maps applications as needed.
  • Current Conditions: Returns detailed current weather conditions such as precipitation, temperature, and wind for a given coordinate location. By default, the most current weather conditions will be returned. Observations from the past 6 or 24 hours for a particular location can be retrieved.
  • Minute Forecast: Request minute-by-minute forecasts for a given location for the next 120 minutes. Users can request forecasts at intervals of 1, 5, and 15 minutes. The response will include details such as the type of precipitation (including rain, snow, or a mixture of both), start time, and precipitation intensity value.
  • Hourly Forecast: Request detailed weather forecasts by hour for the next 1, 12, 24 (1 day), 72 (3 days), 120 (5 days), or 240 hours (10 days) for the given coordinate location. The API returns details such as temperature, humidity, wind, precipitation, and ultraviolet (UV) index.
  • Quarter-Day Forecast: Request detailed weather forecast by quarter-day for the next 1, 5, 10, or 15 days for a given location. Response data is presented by quarters of the day—morning, afternoon, evening, and overnight. Details such as temperature, humidity, wind, precipitation, and UV index are returned.
  • Daily Forecast: Returns detailed weather forecast such as temperature, humidity, wind by day for the next 1, 5, 10, 15, 25, or 45 days for a given coordinate location. The API returns details such as temperature, humidity, wind, precipitation, and UV index.
  • Weather Along Route: The Weather Along Route API returns hyperlocal (one kilometer or less), up-to-the-minute weather nowcasts, weather hazard assessments, and notifications along a route described as a sequence of waypoints. This includes a list of weather hazards affecting each waypoint, and the aggregated hazard index for each waypoint might be used to paint each portion of a route according to how safe it is for the driver. Data is updated every five minutes. The service supplements the Azure Maps Route Service, which allows you to first request a route between an origin and a destination and then use that route as input for the Weather Along Route endpoint.

Using the Azure Maps Weather Services along a calculated route (from the Azure Maps Route Service), customers can generate weather notifications for waypoints that experience an increase in the intensity of a weather hazard. If the vehicle is expected to begin experiencing heavy rain as it reaches a waypoint, a weather notification will be generated, allowing the end product to display a heavy rain notification before the driver reaches that waypoint. The trigger for when to display the notification for a waypoint is left up to the product developer and could be based, for example, on a fixed geometry (geofence) or a selectable distance to the waypoint.
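To make the request shape concrete, here is a minimal Python sketch that calls the Current Conditions endpoint described above. The path, parameter names, api-version, and response field names are taken from the preview documentation as we understand it and should be treated as assumptions; the subscription key and coordinates are placeholders.

    import requests

    SUBSCRIPTION_KEY = "<your-azure-maps-key>"  # placeholder

    def get_current_conditions(lat, lon):
        # Current Conditions endpoint of the Azure Maps Weather Services (preview).
        url = "https://atlas.microsoft.com/weather/currentConditions/json"
        params = {
            "api-version": "1.0",
            "query": f"{lat},{lon}",
            "subscription-key": SUBSCRIPTION_KEY,
        }
        response = requests.get(url, params=params)
        response.raise_for_status()
        return response.json()

    # Example: current conditions for Seattle.
    conditions = get_current_conditions(47.6062, -122.3321)
    for result in conditions.get("results", []):
        # Field names assumed from the preview response shape.
        print(result.get("phrase"), result.get("temperature"), result.get("hasPrecipitation"))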

Azure Maps services are designed to be used in combination with one another to build rich, geospatial applications and insights as part of your Azure Maps account. Azure Maps Weather Services is a new pillar of intelligence added to Azure Maps Location Based Services, Azure Maps Mobility Services, and Azure Maps Spatial Operations, all actuated via the Azure Maps Web and Android SDKs and REST endpoints.

These new weather services are available to all Azure customers, including both pay-as-you-go and enterprise agreements. Simply navigate to the Azure portal, create your Azure Maps account, and start using the Azure Maps Weather Services.

We want to hear from you

We are always working to grow and improve the Azure Maps platform and want to hear from you! We’re here to help and want to make sure you get the most out of the Azure Maps platform.

  • Have a feature request? Add it or vote up the request on our feedback site.
  • Having an issue getting your code to work? Have a topic you would like us to cover on the Azure blog? Ask us on the Azure Maps forums.
  • Looking for code samples or wrote a great one you want to share? Join us on GitHub.
  • To learn more, read the Azure Maps documentation.

Preview: Server-side encryption with customer-managed keys for Azure Managed Disks


Today we’re introducing the preview for server-side encryption (SSE) with customer-managed keys (CMK) for Azure Managed Disks. Azure customers already benefit from server-side encryption with platform managed keys (PMK) for Azure Managed Disks enabled by default. Customers also benefit from Azure disk encryption (ADE) that leverages the BitLocker feature of Windows and the DM-Crypt feature of Linux to encrypt Managed Disks with customer managed keys within the guest virtual machine.

Server-side encryption with customer-managed keys improves on platform managed keys by giving you control of the encryption keys to meet your compliance needs. It improves on Azure disk encryption by enabling you to use any OS types and images for your virtual machines by encrypting data in the storage service. Server-side encryption with customer-managed keys is integrated with Azure Key Vault (AKV) that provides highly available and scalable, secure storage for RSA cryptographic keys backed by hardware security modules (HSMs). You can either import your RSA keys to Azure Key Vault or generate new RSA keys in Azure Key Vault.

Azure Storage handles the encryption and decryption in a fully transparent fashion using envelope encryption. It encrypts data using an Advanced Encryption Standard (AES) 256-based data encryption key, which is in turn protected using your keys stored in Azure Key Vault. You have full control of your keys, and Azure Managed Disks uses a system-assigned managed identity in your Azure Active Directory tenant to access keys in Azure Key Vault. A user with the required permissions in Azure Key Vault must first grant access before Azure Managed Disks can use the keys. You can prevent Azure Managed Disks from accessing your keys by either disabling your keys or by revoking access controls for your keys. Moreover, you can track key usage through Azure Key Vault monitoring to ensure that only Azure Managed Disks or other trusted Azure services are accessing your keys.

To enable customer-managed keys for Azure Managed Disks, you must first create an instance of a new resource type called DiskEncryptionSet, which represents a customer-managed key. You must associate your disks, snapshots, and images with a DiskEncryptionSet to encrypt them with customer-managed keys. There is no restriction on the number of resources that can be associated with the same DiskEncryptionSet.
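As a rough sketch of what that setup could look like against the Azure Resource Manager REST API (version 2019-07-01, as noted in the availability section below), the following Python snippet creates a DiskEncryptionSet that points at a key in Azure Key Vault. The request body shape is our reading of the preview documentation and should be treated as an assumption; all values in angle brackets are placeholders.

    import requests

    # Placeholders; a bearer token can be obtained, for example, with
    # `az account get-access-token`.
    SUBSCRIPTION_ID = "<subscription-id>"
    RESOURCE_GROUP = "<resource-group>"
    DES_NAME = "<disk-encryption-set-name>"
    TOKEN = "<azure-ad-bearer-token>"

    url = (
        f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
        f"/resourceGroups/{RESOURCE_GROUP}"
        f"/providers/Microsoft.Compute/diskEncryptionSets/{DES_NAME}"
        "?api-version=2019-07-01"
    )

    body = {
        "location": "westcentralus",             # preview region from this announcement
        "identity": {"type": "SystemAssigned"},  # identity used to access Key Vault
        "properties": {
            "activeKey": {                       # assumed property shape from preview docs
                "sourceVault": {"id": "<key-vault-resource-id>"},
                "keyUrl": "<key-identifier-url>",
            }
        },
    }

    response = requests.put(url, json=body, headers={"Authorization": f"Bearer {TOKEN}"})
    response.raise_for_status()
    print(response.json())

Once the DiskEncryptionSet exists and has been granted access to the key in Azure Key Vault, disks, snapshots, and images reference it to be encrypted with the customer-managed key.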

SSE with CMK setup.

Availability

Server-side encryption with customer-managed keys is available for Standard HDD, Standard SSD, and Premium SSD Managed Disks. You can now perform the following operations in the West Central US region via Azure Compute REST API version 2019-07-01:

  • Create a virtual machine from an Azure Marketplace image with OS disk encrypted with server-side encryption with customer-managed keys.
  • Create a custom image encrypted with server-side encryption with customer-managed keys.
  • Create a virtual machine from a custom image with OS disk encrypted with server-side encryption with customer-managed keys.
  • Create data disks encrypted with server-side encryption with customer-managed keys.
  • Create snapshots encrypted with server-side encryption with customer-managed keys.

We’re going to add support for Azure SDKs and other regions soon.

Getting Started

Please email AzureDisks@microsoft.com to get access to the preview.

Review the server-side encryption with customer-managed keys for Managed Disks preview documentation to learn how to do the following:

  • Create a virtual machine from an Azure marketplace image with disks encrypted with server-side encryption with customer-managed keys
  • Create a virtual machine from a custom image with disks encrypted with server-side encryption with customer-managed keys
  • Create an empty managed disk encrypted with server-side encryption with customer-managed keys and attach it to a virtual machine
  • Create a new custom image encrypted with server-side encryption with customer-managed keys from a virtual machine with disks encrypted with server-side encryption with customer-managed keys.

Customize networking for DR drills: Azure Site Recovery


One of the most important features of a disaster recovery tool is failover readiness. Administrators ensure this by watching for health signals from the product. Some also choose to set up their own monitoring solutions to track readiness. End-to-end testing is conducted using disaster recovery (DR) drills every three to six months. Azure Site Recovery offers this capability for replicated items, and customers rely heavily on test failovers or planned failovers to ensure that applications work as expected. With Azure Site Recovery, customers are encouraged to use a non-production network for test failovers so that IP addresses and networking components remain available in the target production network in case of an actual disaster. Even with a non-production network, the drill should be an exact replica of the actual failover.

Until now, it has only been close to being that replica. The networking configurations for test failover did not entirely match the failover settings. The choice of subnet, network security group, internal load balancer, and public IP address per network interface controller (NIC) could not be made. This meant that customers had to follow a particular alphabetical naming convention for subnets in the test failover network to ensure the replicated items failed over as intended, a requirement that conflicted with organizations that enforce naming conventions for Azure resources. Also, if you wished to attach networking components, it was only possible to do so manually after the test failover operation. Further, if a customer tested the failover of an entire application via a recovery plan, the Azure virtual network selection was applied to all the virtual machines irrespective of the application tier.

Test failover settings for networking resources

DR administrators using Azure Site Recovery now have a highly configurable setup for such operational activities. The network settings required for test failover are available for every replicated item. These settings are optional; if you skip them, the old behavior applies and you select the Azure virtual network at the time of triggering the test failover.

A screen shot of the Compute and Networks settings page.

You can go to the Compute and Network blade and choose a test failover network, and then update all the networking settings for each NIC. Only settings that were configured on the source at the time of enabling replication can be updated, and the settings only allow you to choose a networking resource that has already been created in the target location. Azure Site Recovery does not replicate changes to networking resources at the source. Read the full guidance on networking customization in the Azure Site Recovery documentation.

A screen shot of the Network interface properties page.

When initiating a test failover via the replicated item blade, you will no longer see the dropdown to choose an Azure virtual network if the settings are pre-configured. If you initiate a test failover via a recovery plan, you will still see the dropdown to choose a virtual network; however, that network will be applied only to those machines that do not have pre-configured settings.

These settings are only available for Azure machines that are protected by Azure Site Recovery. Test failover settings for VMware and physical machines will be available in a couple of milestones.

Azure natively provides you the high availability and reliability for your mission-critical workloads, and you can choose to improve your protection and meet compliance requirements using the disaster recovery provided by Azure Site Recovery. Getting started with Azure Site Recovery is easy, check out pricing information, and sign up for a free Microsoft Azure trial. You can also visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers.

Related links and additional content

New learning resources for building ASP.NET Core apps using Visual Studio for Mac


Visual Studio for Mac has support for the latest C# 8 IntelliSense and code refactoring capabilities across C#, HTML, JS, CSS and Razor files. Along with NuGet package management, source control, debugging and publishing features, Visual Studio for Mac provides the best experience for building .NET Core and ASP.NET Core apps on the Mac.

Getting started with ASP.NET Core in Visual Studio for Mac

If you’re just getting started with ASP.NET Core and Visual Studio for Mac, take your first steps with the Getting started with ASP.NET Core in Visual Studio for Mac tutorial. This tutorial will first walk you through installing Visual Studio for Mac with a one-click setting that will include all tools required for .NET Core and ASP.NET Core development before showing you how to create your first “Hello world!” ASP.NET Core website.

ASP.NET Core Beginners Workshop

In this step-by-step ASP.NET Core Beginners workshop, you will learn the basics of building a simple ASP.NET Core web app that uses Razor pages. This tutorial consists of the following four modules and includes creating the project, adding an Entity Framework model, working with a database and much more:

  1. Creating a Razor Page project
  2. Adding a model to an ASP.NET Core Razor Pages app
  3. Updating the generated pages
  4. Adding search to a Razor Pages app

eShopOnWeb tutorial

If you’re already familiar with ASP.NET Core and are looking for a more realistic sample application, we have you covered with the eShopOnWeb sample application. eShopOnWeb is a sample application demonstrating a layered application architecture with a monolithic deployment model. Head over to extending an existing ASP.NET Core web application for the video tutorials and step-by-step guides for Visual Studio for Mac. This tutorial consists of the following modules:

  1. Getting started with eShopOnWeb
  2. Working with the eShopOnWeb solution
  3. Adding Docker and running it locally
  4. Deploying to Azure App Services

And if you don’t have an Azure account to work on this tutorial, you can get one totally free here! This also comes with over $200 free Azure credits to spend as you see fit.

Additionally, there is also a fantastic 130 page PDF ebook available which goes over all the details of this full sample application that you can download from GitHub.

Thanks for trying out these tutorials and Visual Studio for Mac. If you have any feedback or suggestions, please leave them in the comments below. You can also reach out to us on Twitter at @VisualStudioMac. For any issues that you run into when using Visual Studio for Mac, please Report a Problem.

The post New learning resources for building ASP.NET Core apps using Visual Studio for Mac appeared first on Visual Studio Blog.

Automated machine learning and MLOps with Azure Machine Learning


Azure Machine Learning is the center for all things machine learning on Azure, be it creating new models, deploying models, managing a model repository, or automating the entire CI/CD pipeline for machine learning. We recently made some amazing announcements on Azure Machine Learning, and in this post, I’m taking a closer look at two of the most compelling capabilities that your business should consider while choosing the machine learning platform.

Before we get to the capabilities, let’s get to know the basics of Azure Machine Learning.

What is Azure Machine Learning?

Azure Machine Learning is a managed collection of cloud services, relevant to machine learning, offered in the form of a workspace and a software development kit (SDK). It is designed to improve the productivity of:

  • Data scientists who build, train and deploy machine learning models at scale
  • ML engineers who manage, track and automate the machine learning pipelines

Azure Machine Learning comprises the following components:

  • An SDK that plugs into any Python-based IDE, notebook or CLI
  • A compute environment that offers both scale up and scale out capabilities with the flexibility of auto-scaling and the agility of CPU or GPU based infrastructure for training
  • A centralized model registry to help keep track of models and experiments, irrespective of where and how they are created
  • Managed container service integrations with Azure Container Instance, Azure Kubernetes Service and Azure IoT Hub for containerized deployment of models to the cloud and the IoT edge
  • A monitoring service that helps track metrics from models that are registered and deployed via Azure Machine Learning

Let us introduce you to Machine Learning with the help of this video where Chris Lauren from the Azure Machine Learning team showcases and demonstrates it.

The thumbnail image for a video

As you see in the video, Azure Machine Learning can cater to workloads of any scale and complexity. Below is the flow for the connected car application demonstrated in the video; this is also a canonical pattern for machine learning solutions built on Azure Machine Learning:

Connected Car demo architecture leveraging Azure Machine Learning

Visual: Connected Car demo architecture leveraging Azure Machine Learning

Now that you understand Azure Machine Learning, let’s look at the two capabilities that stand out:

Automated machine learning

Data scientists spend an inordinate amount of time iterating over models during the experimentation phase. The whole process of trying out different algorithms and hyperparameter combinations until an acceptable model is built is extremely taxing for data scientists, due to the monotonous and non-challenging nature of work. While this is an exercise that yields massive gains in terms of the model efficacy, it sometimes costs too much in terms of time and resources and thus may have a negative return on investment (ROI).

This is where automated machine learning (ML) comes in. It leverages concepts from the research paper on Probabilistic Matrix Factorization and implements an automated pipeline that tries out intelligently selected algorithms and hyperparameter settings, based on the heuristics of the data presented and taking the given problem or scenario into consideration. The result of this pipeline is a set of models that are best suited for the given problem and dataset.

Visual: Automated machine learning

Automated ML supports classification, regression, and forecasting, and it includes features such as handling missing values, early termination based on a stopping metric, blacklisting algorithms you don’t want to explore, and many more options that optimize time and resources.
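
As a rough illustration of how these options surface in code, here is a hedged sketch of an automated ML run using the v1 azureml SDK. Parameter names vary between SDK versions (for example, blacklist_models was later renamed blocked_models), and the dataset, label column, and compute target names are placeholders:

```python
# Hedged sketch of an automated ML experiment, assuming the v1 azureml-train-automl SDK.
# Dataset, label column, and compute target names are placeholders.
from azureml.core import Workspace, Experiment, Dataset
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()
training_data = Dataset.get_by_name(ws, name="connected-car-telemetry")  # hypothetical dataset

automl_config = AutoMLConfig(
    task="classification",
    primary_metric="AUC_weighted",
    training_data=training_data,
    label_column_name="failure_within_30_days",        # hypothetical label column
    n_cross_validations=5,
    enable_early_stopping=True,                        # early termination by a stopping metric
    blacklist_models=["KNN"],                          # exclude algorithms you don't want to explore
    experiment_timeout_minutes=60,
    compute_target=ws.compute_targets["cpu-cluster"],  # hypothetical AmlCompute cluster
)

run = Experiment(ws, "automl-demo").submit(automl_config, show_output=True)
best_run, fitted_model = run.get_output()              # best model found by the sweep
```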

Automated ML is designed to help professional data scientists be more productive and spend their precious time on specialized tasks such as tuning and optimizing models and mapping real-world cases to ML problems, rather than on monotonous trial and error with a long list of algorithms. With its newly introduced UI mode (akin to a wizard), automated ML also opens the doors of machine learning to novice or non-professional data scientists: they can become valuable contributors in data science teams by leveraging these augmented capabilities and producing accurate models that accelerate time to market. This ability to expand data science teams beyond a handful of highly specialized data scientists enables enterprises to invest in and reap the benefits of machine learning at scale, without having to forgo high-value use cases for lack of data science talent.

To learn more about automated ML in Azure Machine Learning, explore this automated machine learning article.

Machine learning operations (MLOps)

Creating a model is just one part of an ML pipeline, and arguably the easier part. Taking that model to production and reaping the benefits of it is a completely different ball game. One has to be able to package the models, deploy them, track and monitor them across various deployment targets, collect metrics, use those metrics to determine the efficacy of the models, and then enable retraining on the basis of these insights and/or new data. On top of that, all of this needs a mechanism that can be automated, with the right knobs and dials so that data science teams can keep tabs on the pipeline and not allow it to go rogue, which could result in considerable business losses, as these models are often linked directly to customer actions.

This problem is very similar to what application development teams face with respect to managing apps and releasing new versions at regular intervals with improved features and capabilities. App dev teams address this with DevOps, which is the industry standard for managing operations across an app development cycle. Replicating the same for machine learning cycles, however, is not an easy task.

Visual: DevOps Process

This is where Azure Machine Learning shines the most. It presents the most complete and intuitive model lifecycle management experience, alongside integration with Azure DevOps and GitHub.

The first task in ML lifecycle management, after a data scientist has created and validated a model or an ML pipeline, is packaging it so that it can execute wherever it needs to be deployed. This means that the ML platform needs to enable containerizing the model with all of its dependencies, since containers are the default execution unit across scalable cloud services and the IoT edge. Azure Machine Learning provides an easy way for data scientists to package their models with simple commands that track all dependencies, such as conda environments, versioned Python libraries, and other libraries the model references, so that the model can execute seamlessly in the deployed environment.
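
A minimal sketch of what capturing those dependencies can look like with the v1 SDK; the conda file and scoring script names are assumptions:

```python
# Hedged sketch: capture a model's dependencies as an Environment and pair it with
# a scoring script, so the model can be containerized with everything it needs.
from azureml.core import Workspace, Environment
from azureml.core.model import InferenceConfig

ws = Workspace.from_config()

# environment.yml is a hypothetical conda specification listing Python and library versions
env = Environment.from_conda_specification(name="scoring-env", file_path="environment.yml")

# score.py is a hypothetical entry script with the standard init()/run() functions
inference_config = InferenceConfig(entry_script="score.py", environment=env)
```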

The next step is to version control these models. The code generated, such as the Python notebooks or scripts, can easily be version controlled in GitHub, and this is the recommended approach; but in addition to the notebooks and scripts, you also need a way to version control the models, which are different entities than the Python files. This is important because data scientists may create multiple versions of a model and very easily lose track of them in the search for better accuracy or performance. Azure Machine Learning provides a central model registry, which forms the foundation of the lifecycle management process. This repository enables version control of models, stores model metrics, allows for one-click deployment, and even tracks all deployments of the models so that you can restrict usage if a model becomes stale or its efficacy is no longer acceptable. Having this model registry is key, as it also helps trigger other activities in the lifecycle when new changes appear or metrics cross a threshold.

Visual: Model Registry in Azure Machine Learning
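
Registering a model version into that registry is a single SDK call; in this hedged sketch the file path, model name, and tags are placeholders:

```python
# Hedged sketch: register a trained model file into the central model registry.
from azureml.core import Workspace
from azureml.core.model import Model

ws = Workspace.from_config()

model = Model.register(
    workspace=ws,
    model_path="outputs/model.pkl",          # hypothetical local path to the trained model
    model_name="car-failure-classifier",     # hypothetical registry name; re-registering bumps the version
    tags={"area": "connected-car", "auc": "0.91"},
    description="Demo model registered from a training run",
)
print(model.name, model.version)
```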

Once a model is packaged and registered, it’s time to test the packaged model. Since the package is a container, it is ideal to test it on Azure Container Instances, which provides an easy, cost-effective mechanism to deploy containers. The important thing here is that you don’t have to go outside Azure Machine Learning, as it has strong links to Azure Container Instances built into its workspace. You can easily set up an Azure Container Instances deployment from within the workspace, or from the IDE where you’re already using Azure Machine Learning, via the SDK. Once you deploy this container to Azure Container Instances, you can easily run inference against the model for testing purposes.
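
A hedged sketch of such a test deployment to Azure Container Instances, reusing the workspace, registered model, and inference configuration from the earlier sketches:

```python
# Hedged sketch: deploy the registered model to Azure Container Instances for testing.
from azureml.core.model import Model
from azureml.core.webservice import AciWebservice

aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Model.deploy(
    workspace=ws,                        # workspace from the earlier sketches
    name="car-failure-aci-test",         # hypothetical service name
    models=[model],
    inference_config=inference_config,
    deployment_config=aci_config,
)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)               # REST endpoint for sending test inference requests
```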

Following a thorough round of testing, it is time to deploy the model into production. Production environments are synonymous with scale, flexibility, and tight monitoring capabilities. This is where Azure Kubernetes Service (AKS) can be very useful for container deployments: it provides scale-out capabilities, since it is a cluster that can be sized to cater to the needs of the business. Again, much like with Azure Container Instances, Azure Machine Learning provides the capability to set up an AKS cluster from within its workspace or the user’s IDE of choice.

If your models are sufficiently small and don’t have scale-out requirements, you can also take them to production on Azure Container Instances. Usually, though, that’s not the case: models are accessed by end-user applications or many different systems, so planning for scale always helps. Both Azure Container Instances and AKS provide extensive monitoring and logging capabilities.
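
A hedged sketch of promoting the same packaged model to AKS; the cluster name and sizing are assumptions, and ws, model, and inference_config come from the earlier sketches:

```python
# Hedged sketch: provision an AKS cluster from the workspace and deploy the model to it.
from azureml.core.compute import AksCompute, ComputeTarget
from azureml.core.model import Model
from azureml.core.webservice import AksWebservice

# Provision a small AKS cluster from within the workspace (sizing is an assumption)
prov_config = AksCompute.provisioning_configuration(vm_size="Standard_D3_v2", agent_count=3)
aks_target = ComputeTarget.create(ws, name="aks-prod", provisioning_configuration=prov_config)
aks_target.wait_for_completion(show_output=True)

aks_config = AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=2)

service = Model.deploy(
    workspace=ws,
    name="car-failure-prod",             # hypothetical service name
    models=[model],
    inference_config=inference_config,
    deployment_config=aks_config,
    deployment_target=aks_target,
)
service.wait_for_deployment(show_output=True)
```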

Once your model is deployed, you want to be able to collect metrics on it. You want to ascertain whether the model is drifting from its objective and whether its inferences remain useful for the business. This means you capture a lot of metrics and analyze them. Azure Machine Learning enables this tracking of metrics for the model in a very efficient manner, and the central model registry becomes the one place where all of this is hosted.
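
One hedged way to switch on this telemetry for the AKS web service from the previous sketch is shown below; the input payload depends entirely on the hypothetical scoring script:

```python
# Hedged sketch: turn on Application Insights telemetry and model data collection
# for the already-deployed AKS web service, then call it for a test inference.
import json

service.update(enable_app_insights=True, collect_model_data=True)

sample = json.dumps({"data": [[0.1, 0.2, 0.3]]})    # hypothetical input schema
prediction = service.run(input_data=sample)          # inference call; telemetry lands in App Insights
print(prediction)
```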

As you collect more metrics and additional data becomes available for training, there may be a need to retrain the model in the hope of improving its accuracy and/or performance. And since this is a continuous process of integration and deployment (CI/CD), the process needs to be automated. This process of retraining and effective CI/CD of ML models is the biggest strength of Azure Machine Learning.
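
On the Azure Machine Learning side, the retraining itself can be captured as a published pipeline that an automated process can invoke over REST; here is a hedged sketch, with the training script and compute target as placeholders:

```python
# Hedged sketch: wrap a retraining script in an Azure Machine Learning pipeline and
# publish it as a REST endpoint that a CI/CD system (e.g. Azure DevOps) can trigger.
from azureml.core import Workspace
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()

train_step = PythonScriptStep(
    name="retrain-model",
    script_name="train.py",                             # hypothetical training script
    source_directory="./training",                      # hypothetical folder
    compute_target=ws.compute_targets["cpu-cluster"],   # hypothetical AmlCompute cluster
    allow_reuse=False,
)

pipeline = Pipeline(workspace=ws, steps=[train_step])
published = pipeline.publish(name="retraining-pipeline",
                             description="Retrains and re-registers the model")
print(published.endpoint)                               # REST endpoint for automated triggers
```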

Azure Machine Learning integrates with Azure DevOps so that you can create MLOps pipelines inside the DevOps environment. Azure DevOps has an extension for Azure Machine Learning, which enables it to listen to Azure Machine Learning’s model registry in addition to the code repository maintained in GitHub for the Python notebooks and scripts. This makes it possible to trigger Azure Pipelines based on new code commits to the code repository or new models published to the model registry. This is extremely powerful, as data science teams can configure stages for build and release pipelines within Azure DevOps for the machine learning models and completely automate the process.

What’s more, since Azure DevOps is also the environment for managing app lifecycles, it now enables data science teams and app dev teams to collaborate seamlessly and trigger new versions of the apps whenever certain conditions in the MLOps cycle are met, since app dev teams are often the ones leveraging the new versions of the ML models, infusing them into apps or updating inference call URLs when desired.

This may sound simple and like the most logical way of doing it, but few platforms have brought MLOps to life with such close-knit integration across the whole process. Azure Machine Learning does an amazing job of it, enabling data science teams to become immensely productive.

Please see the diagrammatic representation below for MLOps with Azure Machine Learning.

Visual: MLOps with Azure Machine Learning

To learn more about MLOps please visit the Azure Machine Learning documentation on MLOps.

Get started now!

This has been a long post, so thank you for your patience, but this is just the beginning. As we have seen, Azure Machine Learning presents capabilities that make the entire ML lifecycle a seamless process. With these two features, we’re just scratching the surface, as there are many more features to help data scientists and machine learning engineers create, manage, and deploy their models in a much more robust and thoughtful manner.

And there are many more to come. Please visit the Getting started guide to begin this exciting journey with us!

