
Announcing TypeScript 3.1 RC

Today we’re happy to announce the availability of the release candidate (RC) of TypeScript 3.1. Our intent with the RC is to gather any and all feedback so that we can ensure our final release is as pleasant as possible.

If you’d like to give it a shot now, you can get the RC through NuGet, or use npm with the following command:

npm install -g typescript@rc

You can also get editor support for the RC in your editor of choice, such as Visual Studio, Visual Studio Code, or Sublime Text.

Let’s look at what’s coming in TypeScript 3.1!

Mappable tuple and array types

Mapping over values in a list is one of the most common patterns in programming. As an example, let’s take a look at the following JavaScript code:

function stringifyAll(...elements) {
    return elements.map(x => String(x));
}

The stringifyAll function takes any number of values, converts each element to a string, places each result in a new array, and returns that array. If we want to have the most general type for stringifyAll, we’d declare it as so:

declare function stringifyAll(...elements: unknown[]): Array<string>;

That basically says, “this thing takes any number of elements, and returns an array of strings”; however, we’ve lost a bit of information about elements in that transformation.

Specifically, the type system doesn’t remember the number of elements the user passed in, so our output type doesn’t have a known length either. We can try to capture that with overloads:

declare function stringifyAll(...elements: []): string[];
declare function stringifyAll(...elements: [unknown]): [string];
declare function stringifyAll(...elements: [unknown, unknown]): [string, string];
declare function stringifyAll(...elements: [unknown, unknown, unknown]): [string, string, string];
// ... etc

Ugh. And we didn’t even cover taking four elements yet. You end up special-casing all of these possible overloads, and you end up with what we like to call the “death by a thousand overloads” problem. Sure, we could use conditional types instead of overloads, but then you’d have a bunch of nested conditional types.
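
For illustration, a conditional-type version might look something like the following sketch (an assumption of how it could be written, not code from the overloads above); notice that each tuple length still needs its own branch:

type Stringified<T extends unknown[]> =
    T extends [] ? [] :
    T extends [unknown] ? [string] :
    T extends [unknown, unknown] ? [string, string] :
    T extends [unknown, unknown, unknown] ? [string, string, string] :
    // ...and so on for every length we care about
    string[];

declare function stringifyAll<T extends unknown[]>(...elements: T): Stringified<T>;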

If only there was a way to uniformly map over each of the types here…

Well, TypeScript already has something that sort of does that. TypeScript has a concept called a mapped object type which can generate new types out of existing ones. For example, given the following Person type,

interface Person {
    name: string;
    age: number;
    isHappy: boolean;
}

we might want to convert each property to a string as above:

interface StringyPerson {
    name: string;
    age: string;
    isHappy: string;
}

function stringifyPerson(p: Person) {
    const result = {} as StringyPerson;
    for (const prop in p) {
        result[prop] = String(p[prop]);
    }
    return result;
}

Though notice that stringifyPerson is pretty general. We can abstract the idea of Stringify-ing types using a mapped object type over the properties of any given type:

type Stringify<T> = {
    [K in keyof T]: string
};

For those unfamiliar, we read this as “for every property named K in T, produce a new property of that name with the type string.”

We can then rewrite our function to use that type:

function stringifyProps<T>(p: T) {
    const result = {} as Stringify<T>;
    for (const prop in p) {
        result[prop] = String(p[prop]);
    }
    return result;
}

stringifyProps({ hello: 100, world: true }); // has type `{ hello: string, world: string }`

Seems like we have what we want! However, if we tried changing the type of stringifyAll to return a Stringify:

declare function stringifyAll<T extends unknown[]>(...elements: T): Stringify<T>;

And then tried calling it on an array or tuple, we’d only get something that’s almost useful prior to TypeScript 3.1. Let’s give it a shot on an older version of TypeScript like 3.0:

let stringyCoordinates = stringifyAll(100, true);

// No errors!
let first: string = stringyCoordinates[0];
let second: string = stringyCoordinates[1];

Looks like our tuple indexes have been mapped correctly! Let’s grab the length now and make sure that’s right:

   let len: 2 = stringyCoordinates.length
//     ~~~
// Type 'string' is not assignable to type '2'.

Uh. string? Well, let’s try to iterate on our coordinates.

   stringyCoordinates.forEach(x => console.log(x));
// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// Cannot invoke an expression whose type lacks a call signature. Type 'String' has no compatible call signatures.

Huh? What’s causing this gross error message? Well our Stringify mapped type not only mapped our tuple members, it also mapped over the methods of Array, as well as the length property! So forEach and length both have the type string!

While technically consistent in behavior, the majority of our team felt that this use-case should just work. Rather than introduce a new concept for mapping over a tuple, mapped object types now just “do the right thing” when iterating over tuples and arrays. This means that if you’re already using existing mapped types like Partial or Required from lib.d.ts, they automatically work on tuples and arrays now.
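
To make that concrete, here’s a sketch of how the earlier example is expected to behave once mapped types preserve tuple structure in TypeScript 3.1:

// Using the same Stringify type and stringifyAll declaration as above:
type Stringify<T> = { [K in keyof T]: string };
declare function stringifyAll<T extends unknown[]>(...elements: T): Stringify<T>;

let stringyCoordinates = stringifyAll(100, true);
let first: string = stringyCoordinates[0];        // still string
let len: 2 = stringyCoordinates.length;           // now the literal type 2, no error
stringyCoordinates.forEach(x => console.log(x));  // forEach keeps its call signature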

Properties on function declarations

In JavaScript, functions are just objects. This means we can tack properties onto them as we please:

export function readFile(path) {
    // ...
}

readFile.async = function (path, callback) {
    // ...
}

TypeScript’s traditional approach to this has been an extremely versatile construct called namespaces (a.k.a. “internal modules” if you’re old enough to remember). In addition to organizing code, namespaces support the concept of value-merging, where you can add properties to classes and functions in a declarative way:

export function readFile() {
    // ...
}

export namespace readFile {
    export function async() {
        // ...
    }
}

While perhaps elegant for its time, the construct hasn’t aged well. ECMAScript modules have become the preferred format for organizing new code in the broader TypeScript & JavaScript community, and namespaces are TypeScript-specific. Additionally, namespaces don’t merge with var, let, or const declarations, so code like the following (which is motivated by defaultProps from React):

export const FooComponent = ({ name }) => (
    <div>Hello! I am {name}</div>
);

FooComponent.defaultProps = {
    name: "(anonymous)",
};

can’t even simply be converted to

export const FooComponent = ({ name }) => (
    <div>Hello! I am {name}</div>
);

// Doesn't work!
namespace FooComponent {
    export const defaultProps = {
        name: "(anonymous)",
    };
}

All of this collectively can be frustrating since it makes migrating to TypeScript harder.

Given all of this, we felt that it would be better to make TypeScript a bit “smarter” about these sorts of patterns. In TypeScript 3.1, for any function declaration or const declaration that’s initialized with a function, the type-checker will analyze the containing scope to track any added properties. That means that both of the examples above, our readFile function as well as our FooComponent, work without modification in TypeScript 3.1!
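
As a quick sketch, a typed version of the readFile pattern from the start of this section now checks as you’d hope (the signatures here are purely illustrative, not from any real library):

function readFile(path: string): string {
    // ...placeholder body for illustration
    return "";
}

readFile.async = function (path: string, callback: (data: string) => void) {
    // the checker now tracks this property on the function declaration
    callback(readFile(path));
};

readFile.async("config.json", data => console.log(data)); // fully typed property access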

As an added bonus, this functionality in conjunction with TypeScript 3.0’s support for JSX.LibraryManagedAttributes makes migrating an untyped React codebase to TypeScript significantly easier, since it understands which attributes are optional in the presence of defaultProps:

// TypeScript understands that both are valid:
<FooComponent />
<FooComponent name="Nathan" />

Breaking Changes

Our team always strives to avoid introducing breaking changes, but unfortunately there are some to be aware of for TypeScript 3.1.

Vendor-specific declarations removed

TypeScript 3.1 now generates parts of lib.d.ts (and other built-in declaration file libraries) using Web IDL files provided from the WHATWG DOM specification. While this means that lib.d.ts will be easier to keep up-to-date, many vendor-specific types have been removed. We’ve covered this in more detail on our wiki.

Differences in narrowing functions

Using the typeof foo === "function" type guard may provide different results when intersecting with relatively questionable union types composed of {}, Object, or unconstrained generics.

function foo(x: unknown | (() => string)) {
    if (typeof x === "function") {
        let a = x()
    }
}

You can read more on the breaking changes section of our wiki.

Going forward

We’re looking forward to hearing about your experience with the RC. As always, keep an eye on our roadmap to get the whole picture of the release as we stabilize. We expect to ship our final release in just a few weeks, so give it a shot now!


Customizing Azure Blueprints to accelerate AI in healthcare


Artificial Intelligence (AI) holds major potential for healthcare, from predicting patient length of stay to diagnostic imaging, anti-fraud, and many more use cases. To be successful in using AI, healthcare needs solutions, not projects. Learn how you can close the gap to your AI in healthcare solution by accelerating your initiative using Microsoft Azure blueprints.

To rapidly acquire new capabilities and implement new solutions, healthcare IT and developers can now take advantage of industry-specific Azure Blueprints. Blueprints include resources such as example code, test data, security, and compliance support. These are packages that include reference architectures, guidance, how-to guides, and other documentation, as well as executable code and sample test data built around a key use case of interest to healthcare organizations. Blueprints also contain components to support privacy, security, and compliance initiatives, including threat models, security controls, responsibility matrices, and compliance audit reports.

You can learn more by attending the Accelerating Artificial Intelligence (AI) in Healthcare using Microsoft Azure Blueprints Webcast - Part 2: Customization. Building on the introductory Microsoft Azure Blueprints webcast, this session dives deeper, focusing on customizing the blueprints to your organization’s unique needs. This session is intended for healthcare providers, payers, pharmaceuticals, and life science organizations. Key roles include senior technical decision makers, IT Managers, Cloud Architects, and developers.

Key insights

Next steps

We invite you to register or watch the Accelerating AI in Healthcare using Microsoft Azure Blueprints Webcast - Part 2: Customization on demand.

Video Indexer – General availability and beyond


Earlier today, we announced the general availability (GA) of Video Indexer. This means that our customers can count on all the metadata goodness of Video Indexer to always be available for them to use when running their business. However, this GA is not the only Video Indexer announcement we have for you. In the time since we released Video Indexer to public preview in May 2018, we never stopped innovating and added a wealth of new capabilities to make Video Indexer more insightful and effective for your video and audio needs.

Delightful experience and enhanced widgets

The Video Indexer portal already includes insights and timeline panes that enable our customers to easily review and evaluate media insights. The same experience is also available in embeddable widgets, which are a great way to integrate Video Indexer into any application.

We are now proud to release revamped insight and timeline panes. The new insight and timeline panes are built to accommodate the growing number of insights in Video Indexer and are automatically responsive to different form factors.


By the way, with the new insight pane we have also added visualizations for the already existing keyframes extraction capability, as well as new emotion detection insights, which brings us to the next set of announcements.

Richer insights unlocked by new models

The core of Video Indexer is of course the rich set of cross-channel (audio, speech, and visual) machine learning models it provides. We are working hard to continue adding more models, and make improvements to our existing models, in order to provide our customers with more insightful metadata on their videos!

Our most recent additions to Video Indexer’s models are the new emotion detection and topic inferencing models. The new emotion detection model detects emotional moments in video and audio assets based on two channels, speech content and voice tonality. It divides them into four emotional states - anger, fear, joy, and sadness. As with other insights detected by Video Indexer, we provide the exact timeframe for each emotion detected in the video and the results are available both in the JSON file we provide for easy integration and in the insight and timeline experiences to be reviewed in the portal, or as embeddable widgets.


Another important addition to Video Indexer is the ability to do topic inferencing, that is, to understand the high-level topics of a video or audio file based on the spoken words and visual cues. This model is different from the existing keywords extraction model: it detects topics of varying granularity (e.g. Science, Astronomy, or Missions to Mars) that are inferred from the asset but do not necessarily appear in it, whereas extracted keywords are specific terms that actually appeared in the content. Our topics catalog for this model is sourced from multiple resources, including the IPTC media topics taxonomy, in order to provide media-standard topics.

Note that today’s topics exist in the JSON file. To try them out, simply download the file using the curly braces button below the player or from the API, and search for the topics section. Stay tuned for updates on the new user portal experience we are working on!
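
If you’re consuming that JSON programmatically, a small helper like the following sketch can pull the topics out; the property names used here (summarizedInsights, topics, name) are assumptions about the file’s shape, so check the JSON you download for the exact structure:

interface TopicInsight {
    name: string;
}

interface InsightsJson {
    summarizedInsights?: {
        topics?: TopicInsight[];
    };
}

function listTopics(insights: InsightsJson): string[] {
    // read the topics array defensively in case the section is absent
    const topics = insights.summarizedInsights && insights.summarizedInsights.topics;
    return (topics || []).map(t => t.name);
}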

In addition to the newly released models, we are investing in the improvement of existing models. One of those models is the well-loved celebrity recognition model, which we recently enhanced to cover approximately one million faces based on commonly requested data sources such as IMDB, Wikipedia, and top LinkedIn influencers. Try it out, and who knows, maybe you are one of them!


Another model that was recently enhanced is the custom language model that allows each of our customers to extend the speech-to-text performance of Video Indexer to its own specific content and industry terms. Starting last month, we extended this custom language support to 10 different languages including English, Spanish, Italian, Arabic, Hindi, Chinese, Japanese, Portuguese, and French.

Another important model we recently released is the automatic identification of the spoken language in video. With that new capability customers can easily index batches of videos, without manually providing their language. The model automatically identifies the main language used and invokes the appropriate speech-to-text model.


Easily manage your account

Video Indexer accounts rely on Azure Media Services accounts and use their different components as infrastructure to perform encoding, computation, and streaming of the content as needed.

For easier management of the Azure Media Services resources used by Video Indexer, we recently added visibility into the relevant configuration and states from within the Video Indexer portal. From here you can see at any given time what media resource is used for your indexing jobs, how many reserved units are allocated for indexing and of what type, how many indexing jobs are currently running, and how many are queued.

Additionally, if we identify any configuration that might interfere with your indexing business needs, we will surface those as warnings and errors with a link to the location within your Azure portal to tend to the identified issue. This may include cases such as Event Grid notification registration missing in your subscription, Streaming Endpoints disabled, Reserved Units quantity, and more.

To try it out, simply go to your account settings in the Video Indexer portal and choose the account tab.


In that same section, we also added the ability to auto-scale the computation units used for indexing. That means that you can allocate the maximum amount of computation reserved units in your Media Services account, and Video Indexer will stop and start them automatically as needed. As a result, you won’t pay extra money for idle time and you will not have to wait for indexing jobs to complete when the indexing load is high.

Another addition that can help customers who wish to only extract insights, without the need to view the content, is the no streaming option. If this is the case for you, you can now use this newly added parameter while indexing to avoid the encoding costs, as well as get faster indexing. Please note that selecting this option will prevent your video from playing in the portal player. So if the portal or widgets are leveraged in your solution, you would probably want to keep streaming enabled.

Minimal integration effort

With the public preview a few months back, we also released a new and improved Video Indexer v2 RESTful API. This API enables quick and easy integration of Video Indexer into your application, in either a client-to-server or server-to-server architecture.
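
For example, a server-to-server integration might fetch a video’s insights with a call shaped roughly like the sketch below; the URL format and query parameters here are illustrative assumptions, so consult the official API reference for the exact endpoints and authentication flow:

async function getVideoIndex(location: string, accountId: string, videoId: string, accessToken: string) {
    // hypothetical GET of the full insights JSON for a previously indexed video
    const url = `https://api.videoindexer.ai/${location}/Accounts/${accountId}/Videos/${videoId}/Index?accessToken=${accessToken}`;
    const response = await fetch(url);
    if (!response.ok) {
        throw new Error(`Video Indexer request failed with status ${response.status}`);
    }
    return response.json(); // insights such as topics, emotions, faces, and transcripts
}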

Following that API, we recently released a new Video Indexer v2 connector for Logic Apps and Flow. You can set up your own custom Video Indexer workflows to further automate the process of extracting deep insights from your videos quickly and easily without writing a single line of code!

Learn more about the new connector and try out example templates.


To make the integration with Video Indexer fit your current workflow and existing infrastructure, we also expanded our closed caption and subtitle file format support with the addition of Sub Rip Text (SRT) and W3C Timed Text (TTML) file formats. Get more information on how to extract the different caption and subtitle formats.

What's next?

The GA launch is just the beginning. As you can see from this blog there is a lot that we have already done, yet there is a whole lot more that we are actively working on! We are excited to continue this journey together with our partners and customers to enhance Video Indexer and make your video and audio content more discoverable, insightful, and valuable to you.

Have questions or feedback? We would love to hear from you! Use our UserVoice to help us prioritize features, or email VISupport@Microsoft.com for any questions.

From Microsoft Azure to everyone attending IBC Show 2018 – Welkom in Amsterdam!


Media and entertainment industry conferences are by far some of my favorites. Creativity, disruption, opportunity, and technology – particularly cloud, edge, and AI – are everywhere. It’s been exciting to see those things come together at NAB 2018, SIGGRAPH, and now IBC Show 2018. Together with teams from across Microsoft, I’m looking forward to IBC Show and the chance to learn, collaborate, and advance the state of this dynamic industry.

At this year’s IBC we’re excited to announce the general availability of Video Indexer, our advanced metadata extraction service. Announced as public preview earlier this year, Video Indexer provides a rich set of cross-channel (audio, speech, and visual) learning models. Check out Sudheer’s blog for more information on all the new capabilities including emotion detection, topic inferencing, and improvements to the ever-popular celebrity recognition model that recognizes over one million faces.


Video Indexer is just one of the ways Azure is helping customers like Endemol Shine, Multichoice, RTL, and Ericsson with their content needs. At IBC 2018, our teams are excited to share new ways that Azure, together with solutions from our partners, can address common media workflow challenges.

How? Well, read on…

More visual effects and animations mean you need to render more, faster than ever before

With Azure you can burst your render jobs from on-premises to the cloud, more easily and securely than you may have thought. Come see a demonstration of how to use Azure Batch, Avere vFXT for Azure (now in public preview), and your favorite rendering applications. We’ll be using Autodesk Maya to accelerate your productions using cloud computing. You can start running jobs right away with our pre-built images, or bring your own custom VM boot images. We’ll take care of the complexities of licensing and there is no need to move your data. Render more with Azure.

Accelerated production schedules with remote contributions to produce more cost effective, scalable cloud workflows

Together with Dejero, Avid, Haivision, Hiscale, Make.TV, and Signiant we’re showcasing how to make “live production in the cloud” a reality. This demonstration uses a live video stream from the field which is sent to Microsoft Azure by a Dejero EnGo mobile transmitter. Dejero dynamically receives the stream in Azure, transcodes it, and delivers it to Make.TV’s Live Video Cloud, which is used to curate and route the content to any number of destinations from within the Avid MediaCentral Cloud UX. Hiscale’s cloud-based transcoding solution enables live recording into customers’ editing and asset management environments where high- and low-resolution files can be stored in Avid NEXIS. For file-based workflows, Signiant’s technology is used to accelerate ingest. Then, HiScale transcodes the files and stores the assets and metadata in Avid’s Interplay MAM. Learn more about live production in the cloud.

Ever-growing content libraries mean you need more intelligent content management

Our customers consistently tell us they want faster, better ways to store, index, manage, and unlock the value of their content libraries. At IBC we’ll demonstrate how Video Indexer allows you to easily extract metadata, such as emotion and topics, from audio and video files. Then we will show you how to access those assets, and the metadata, through our updated portal experience, API, or through third-party asset managers, including Avid, Dalet, and eMAM. This ensures that those deep insights are available right where editors and production crews need and expect them. You can also surface insights in your own custom apps through a new embeddable insights pane or our API. Video Indexer’s models are customizable and extensible, so they can evolve as your needs do.

Video Indexer leverages the Azure Storage platform, which provides cost effective and simple ways to ingest petabytes of content using our own client tools, as well as third-party file transfer accelerators over the Internet or a private ExpressRoute connection. Where offline transfer is faster or cheaper, we have a range of Data Box solutions to handle anything from single-drive to multi-PB bulk transfers. Our offline media import partners are ready to assist with anything from reels of analog video to thousands of LTO tapes. Azure Storage is designed to offer between eleven and sixteen 9s of durability, depending on the replication option you choose. At IBC we’re showing off lifecycle management policies (in preview), which let you control retention periods and automatic tiering between hot, cool, and archive tiers so that your data is always protected and stored in the most cost-effective way.

Fans around the world want to see their favorite event in ever increasing detail, live

At IBC we’re showing how Azure Media Services can help deliver UHD/4K HLG streams from the cloud. First, Media Excel’s HERO™ on-premises encoders push a UHD/4K HEVC contribution feed over the open Internet using the open source low latency streaming protocol, SRT. This feed is received by Media Excel software encoders running in Azure, transcoded into multiple-bitrate HEVC 60 fps streams, and sent to Azure Media Services for dynamic packaging (DASH/HLS with CMAF) and encryption (PlayReady/Widevine/FairPlay). It’s then ready for delivery using Azure CDN or a CDN of your choice.

MultiChoice, a leading provider of sports, movies, series, and general entertainment channels to over 12 million subscribers in Sub-Saharan Africa (through its DStv services), recently completed a pilot using a similar solution to deliver the first UHD live streaming event in South Africa. They found that Microsoft Azure delivered on the promise of a 3rd party managed cloud solution with real-world effectiveness.

Media solutions from our broad partner ecosystem

Whatever your media challenges, our partners are ready to help with solutions, planning, content creation, management, and monetization. You can learn more about these solutions, and the new capabilities in Video Indexer, in Sudheer’s blog.

The future is cloudy, but bright

Enabling the scenarios above is another step towards the not too distant future where cloud, edge, and AI technologies are put to work for you. For a sneak peek into that future, check out Cloud Champions, brought to you by Microsoft Azure and our partners.

That’s a wrap

If you’re attending IBC Show 2018 please stop by our booth at Hall 1, Booth #C27 to:

  • Chat with product team representatives from Azure Media Services, Azure Storage, Avere, Azure HPC, Cognitive Services, and  Microsoft Skype for Broadcast.
  • Visit with partners from Avid, GreyMeta, Forbidden, Live Arena, Make.TV, Ooyala, Prime Focus Technologies, Teradici, Streaming Buzz, uCast, and Wowza.
  • See some great customer, partner, and product team presentations. To learn more, see the detailed schedule for Microsoft at IBC 2018.

Thanks for reading and have a great show!

Tad

Microsoft Azure Media Services and our partners Welkom you to IBC 2018


Content creators and broadcasters are increasingly embracing the cloud’s global reach, hybrid model, and elastic scale. These attributes, combined with AI’s ability to accelerate insights and time to market across content creation, management, and monetization, are truly transformative.

At the International Broadcasters Conference (IBC) Show 2018, we are focused on bringing Cloud + AI together to help you overcome common media workflow challenges.

Video Indexer, generally available starting today, is a great example of this Cloud + AI focus. It brings together the power of the cloud and Microsoft AI to intelligently analyze your media assets, extract insights, and add metadata. It makes it easier to understand your vast content library, with more than 20 new and improved models, easy-to-use interfaces, a single API, and simplified account management. I have been part of the Video Indexer team since its inception and could not be more excited to see it reach GA. I’m also incredibly proud of the work the team has done to solve real customer problems and make AI tangible in this easy-to-use, elegant solution.

Our partners are already innovating on top of Video Indexer and extending Azure Media Services to advance the state of the art in cloud-based media services and workflows. You can learn more about Video Indexer and our new partner solutions below. You can also check out Tad Brockway’s IBC blog to learn more about new solutions from Azure and our partners that enable compelling workflows across the media value chain.

Microsoft Azure announces general availability of Video Indexer

At IBC 2018, we are thrilled to announce that Video Indexer (VI) is generally available and ready to cater to our media customers’ changing and growing needs.

Announced as a public preview at Microsoft’s Build 2018 conference in May, Video Indexer is an AI-based advanced metadata extraction service. This latest addition to Azure Media Services enables customers to extract insights from Video and Audio files through a rich set of Machine Learning algorithms. Those insights can then be used to improve content discoverability and accessibility, create new monetization opportunities and unlock data-driven experiences.


At its core, Video Indexer orchestrates a cross-channel machine learning analysis (audio, speech, and vision) pipeline for video and audio files, using models that are continuously updated by Microsoft Research. These models bring the power of machine learning to you, enabling you to benefit without having to acquire expertise. Furthermore, our cross-channel models enable even deeper and more accurate insights to be uncovered.

Customers and partners such as AVID, Ooyala, Dalet, Box, Endemol Shine Group, AVROTROS, and eMAM are already using the Video Indexer service for speech to text and closed captioning in ten different languages, visual text recognition (OCR), keywords extraction, label identifications, out of the box and custom brand detection, face identification, celebrity and custom face recognition, sentiment analysis, key frame detection and more.

At GA we are, of course, adding new capabilities. The Emotion recognition model detects emotional moments in video and audio assets based on speech content and voice tonality. Our Topic inferencing model is built to understand the high-level topics of the video or audio files based on spoken words and visual cues. Topics in this model are sourced from IPTC taxonomy among others to align to industry standards. We’ve also enhanced the well-loved celebrity recognition model which now covers one million faces based on commonly requested data sources such as IMDB, Wikipedia, and top LinkedIn influencers.

We make it easy to try out Video Indexer – just upload, process and review video insights using our web portal. You can even customize models in a highly visual way without having to write a line of code. There is no charge to use the portal; however, if you find the experience suits your needs you can connect to an Azure account and use it in production. Existing customers will find new insight and timeline panels that are available in the portal and to embed. These sleek new panels are built to support the growing number of insight visuals and are responsive to different screen form factors.

Get started today using Video Indexer to enable deep search on video and audio archives, reduce editing, content creation costs, and provide innovative customer experiences.

Azure and our partners address your challenges at IBC 2018

At this year’s IBC we’re showcasing progress towards a future where Cloud, Edge, and AI help the media industry compete and thrive.

First up we’ve partnered with Dejero, Avid, Haivision, Hiscale, Make.TV, and Signiant to showcase “live production in the cloud.” Live cloud workflows have historically been a challenge and this demonstration will take us one step closer.

We’re also showcasing how Azure Media Services can help deliver UHD/4K HLG streams from the cloud in partnership with MediaExcel.

MultiChoice, a leading pay-TV operator for more than 12 million subscribers in Sub-Saharan Africa recently completed a pilot using a similar solution to deliver the first UHD live streaming event in South Africa. They found that Microsoft Azure delivered on the promise of a third party managed cloud solution with real-world effectiveness.

You can learn more about these solutions at our booth at IBC or from this blog.

Media solutions powered by a broad ecosystem

From creation to management and monetization, our partners continue to innovate for you.

Content creation

  • Avid MediaCentral’s latest version, which just shipped, features the powerful MediaCentral | Search app, which makes all production and archived assets, stored across multiple local and remote systems, accessible to every in-house and remote contributor. Other features in the latest release include a rundown app for story and sequence editing, a social media research app for quickly monitoring a story and working it into your rundown, a publish app for distributing content quickly across social media platforms, and MediaCentral | Ingest for enabling OP1A transcoding into growing media for editing while capturing and playing out with FastServe | Playout. These services are all built on a new backend on Azure that enables faster deployment and high availability.
  • Nimble Collective recently launched Nimble Studio offering which enables studios and enterprise customers to harness their favorite tools through a powerful and secure pipeline. They recently visited the Microsoft Store in Vancouver to demonstrate how simple it was to get a studio up and running.

Content management

  • Prime Focus Technologies (PFT) has partnered with Microsoft to further strengthen its flagship product, CLEAR™ Media ERP, which currently handles 1.5 million hours of content annually. As part of the collaboration, PFT is migrating its data storage to Azure to provide uninterrupted service to CLEAR customers. Leveraging Azure’s best-in-class cloud services, scale, reach, and rich AI capabilities, CLEAR offers a reliable, secure, scalable, intelligent and media-savvy ERP solution globally. PFT will showcase CLEAR integrated with Microsoft’s powerful Azure cloud services at IBC 2018 - booth #7.C05.
  • Dalet has integrated Video Indexer into Dalet Media Cortex, a cloud-based SaaS, to enable existing Dalet Galaxy customers to consume Cognitive Services on demand. Dalet Media Cortex uses Video Indexer generated metadata to augment the content production experience and its effectiveness. For example, the new Dalet Discovery Panel provides editors with contextual suggestions of relevant content matching their work in progress.
  • Empress Media Asset Management (eMAM) has integrated Video Indexer into its flagship product, eMAM. eMAM is a flexible, scalable media asset management system that can work natively in Azure or in hybrid environments. Organizations can now use Video Indexer to enrich the metadata for current or legacy content.
  • Zoom Media is a Dutch startup specializing in Automated Speech Recognition (ASR) technology.  They have extended the speech-to-text capabilities of Video Indexer, currently supporting ASR for ten languages, to include Dutch, Swedish, Danish, and Norwegian. Microsoft and Zoom Media will present these new features at IBC Show 2018 in Amsterdam.

Content distribution

  • Built on Azure, uCast’s data-driven, OTT Video Platform supports turnkey AVOD, SVOD, TVOD, and Live functionality. uCast’s content monetization platform recently launched Sports Illustrated’s signature OTT video service SI.TV on Azure. It will also host a new Advertising Video-on-Demand (AVOD) service for an Indonesian-based mobile telecommunications provider and their more than 60 million subscribers.
  • Nowtilus, a digital video distribution solutions provider, has deployed its server-side ad-insertion (SSAI) technology for on-demand and live-linear scenarios on Azure. It’s integrated with platforms such as uCast and waipu.tv (Exaring) to offer the industry’s best standards-compliant stream personalization and ad-targeting in TV and VOD.
  • StreamingBuzz has developed the StreamingSportzz Fan App, an Azure-based solution that offers a highly interactive experience for sports fans, athletes, and coaches. The experience includes multiple angles, 360 video, VR, and AR modes with statistics and match analysis. They have also created BuzzOff, an innovative Azure-based streaming solution for in-flight entertainment that enables passengers on flights to stream offline DRM-protected content to their devices without the need to download an app.
  • Media Excel has introduced a hybrid architecture for live contribution and transcoding of UHD adaptive services based on its HERO product line. This architecture enables PayTV operators and content providers to deploy secure scalable live UHD OTT workflows on Azure by combining multiple in-sync encoder instances for a highly-redundant yet cost-effective offering. For a live demo of the end-end solution, please visit Microsoft (1.C27) and Media Excel (14.G18) booths at IBC.
  • Telestream and Microsoft are partnering closely to support content producers, owners, and distributors as well as corporations in their journey to cloud video production and OTT distribution. With the recently launched Vantage and Telestream Cloud solutions on Azure, Telestream offers comprehensive hybrid and cloud-based media processing in Azure to enable broadcasters and content producers to reduce CAPEX, increase agility, and enhance security for global content production. Telestream has developed Wirecast S specifically for Microsoft Stream within Office 365 and is developing an Azure-based video quality monitoring solution for virtual, cloud-based deployments.

Industry updates

Microsoft has joined the Secure Reliable Transport (SRT) Alliance! Having pioneered adaptive streaming for the industry and seen the benefits of broad industry adoption, we believe in the need for a simple, scalable and efficient ingest protocol for supporting large scale live events from the cloud. SRT is a proven video transport that lends itself well for cloud-based live workflows. By joining the SRT Alliance, we hope to create broader support, enthusiasm and adoption.

Come see us at IBC 2018

If you’re attending IBC 2018, stop by Hall 1, Booth #C27. In addition to great demos and product team representatives from Azure Media Services, Azure Storage, Avere, Azure Cognitive Services, PlayReady, and Skype for Broadcast, we will also feature the following partner showcases:

  • Avid

  • Blackbird

  • GreyMeta

  • LiveArena

  • StreamingBuzz

  • Make.TV

  • Ooyala

  • Teradici

  • uCast Global

  • Wowza Media Systems

  • X.news

Microsoft will also feature an in-booth presentation theatre with customer, partner and product presentations scheduled throughout the day. Check out a detailed schedule of all presentations and speakers.

If you are not attending the conference but would like to learn more about our media services, follow the Azure Blog to stay up to date on new announcements.

Finally, a big thank you to all our dedicated and growing community of developers, customers, and partners that continue to provide valuable and actionable feedback.

Thank you!

A complete containerized .NET Core Application microservice that is as small as possible


OK, maybe not technically a microservice, but that's a hot buzzword these days, right? A few weeks ago I blogged about Improvements on ASP.NET Core deployments on Zeit's now.sh and making small container images. By the end I was able to cut my container size in half.

The trimming I was using is experimental and very aggressive. If your app loads things at runtime - like ASP.NET Razor Pages sometimes does - you may end up getting weird errors at runtime when a Type is missing. Some types may have been trimmed away!

For example:

fail: Microsoft.AspNetCore.Server.Kestrel[13]

Connection id "0HLGQ1DIEF1KV", Request id "0HLGQ1DIEF1KV:00000001": An unhandled exception was thrown by the application.
System.TypeLoadException: Could not load type 'Microsoft.AspNetCore.Diagnostics.IExceptionHandlerPathFeature' from assembly 'Microsoft.Extensions.Primitives, Version=2.1.1.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'.
at Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware.Invoke(HttpContext context)
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine)
at Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.HostFiltering.HostFilteringMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Hosting.Internal.HostingApplication.ProcessRequestAsync(Context context)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ProcessRequests[TContext](IHttpApplication`1 application)

Yikes!

I'm doing a self-contained deployment and then trimming the result! Richard Lander has a great dockerfile example. Note how he's doing the package addition with the dotnet CLI with "dotnet add package" and the subsequent trim within the Dockerfile (as opposed to you adding it to your local development copy's csproj).

I'm adding the Tree Trimming Linker in the Dockerfile, so the trimming happens when the container image is built. I'm using the dotnet command to "dotnet add package ILLink.Tasks". This means I don't need to reference the linker package at development time - it's all at container build time.

FROM microsoft/dotnet:2.1-sdk-alpine AS build

WORKDIR /app

# copy csproj and restore as distinct layers
COPY *.sln .
COPY nuget.config .
COPY superzeit/*.csproj ./superzeit/
RUN dotnet restore

# copy everything else and build app
COPY . .
WORKDIR /app/superzeit
RUN dotnet build

FROM build AS publish
WORKDIR /app/superzeit
# add IL Linker package
RUN dotnet add package ILLink.Tasks -v 0.1.5-preview-1841731 -s https://dotnet.myget.org/F/dotnet-core/api/v3/index.json
RUN dotnet publish -c Release -o out -r linux-musl-x64 /p:ShowLinkerSizeComparison=true

FROM microsoft/dotnet:2.1-runtime-deps-alpine AS runtime
ENV DOTNET_USE_POLLING_FILE_WATCHER=true
WORKDIR /app
COPY --from=publish /app/superzeit/out ./
ENTRYPOINT ["dotnet", "superzeit.dll"]

I did end up hitting this bug in the Linker (it's not Released) but there's an easy workaround. I just need to set the property CrossGenDuringPublish to false in the project file.

If you look at the Advanced Instructions for the Linker you can see that you can "root" types or assemblies. Root means "don't mess with these or stuff that hangs off them." So I just need to exercise my app at runtime and make sure that all the types that my app needs are available, but no unnecessary ones.

I added the Assemblies I wanted to keep (not remove) while trimming/linking to my project file:

<Project Sdk="Microsoft.NET.Sdk.Web">


<PropertyGroup>
<TargetFramework>netcoreapp2.1</TargetFramework>
<CrossGenDuringPublish>false</CrossGenDuringPublish>
</PropertyGroup>

<ItemGroup>
<LinkerRootAssemblies Include="Microsoft.AspNetCore.Mvc.Razor.Extensions;Microsoft.Extensions.FileProviders.Composite;Microsoft.Extensions.Primitives;Microsoft.AspNetCore.Diagnostics.Abstractions" />
</ItemGroup>

<ItemGroup>
<!-- this can be here, or can be done all at runtime in the Dockerfile -->
<!-- <PackageReference Include="ILLink.Tasks" Version="0.1.5-preview-1841731" /> -->
<PackageReference Include="Microsoft.AspNetCore.App" />
</ItemGroup>

</Project>

My strategy for figuring out which assemblies to "root" and exclude from trimming was literally to just iterate. Build, trim, test, add an assembly by reading the error message, and repeat.

This sample ASP.NET Core app will deploy cleanly on Zeit with the smallest image footprint possible. https://github.com/shanselman/superzeit

Next I'll try an actual Microservice (as opposed to a complete website, which is what this is) and see how small I can get that. Such fun!

UPDATE: This technique works with "dotnet new webapi" as well - the image is about 73 megs according to "docker images", and it's 34 megs when sent and squished through Zeit's "now" CLI.

Small services!


Sponsor: Rider 2018.2 is here! Publishing to IIS, Docker support in the debugger, built-in spell checking, MacBook Touch Bar support, full C# 7.3 support, advanced Unity support, and more.


© 2018 Scott Hanselman. All rights reserved.
     

Because it’s Friday: Hurricane Trackers


With Hurricane Florence battering the US and Typhoon Manghkut bearing down on the Philippines, it's a good time to take a look at the art of visualizing predicted hurricane paths. (By the way, did you know that "typhoon", "hurricane" and "cyclone" are just different names for the same weather phenomenon?) Flowing Data has a good overview of the ways media have been visualizing the predicted path (hat tip: reader MB), including this animation from Axios which does a good job of demonstrating the uncertainty in the forecast:

[Animation: Axios Hurricane Florence tracker]

A good thing to be aware of, though, is that the cones around the predicted tracks do not represent the size of the storm, but rather the uncertainty in the position of the center of the storm.

For a "live" view though, the place I like to look is the global wind visualization from the Climate Literacy and Energy Awareness Network. Here's how Florence looks at this writing (3:45AM East Coast time). Click the image to see the current animated view.

[Image: global wind visualization showing Hurricane Florence]

That's all from us at the blog for this week. For those in the path of the storms, good luck and stay safe. 

 

How many deaths were caused by the hurricane in Puerto Rico?


President Trump is once again causing distress by downplaying the number of deaths caused by Hurricane Maria's devastation of Puerto Rico last year. Official estimates initially put the death toll at 15 before raising it to 64 months later, but it was clear even then that those numbers were absurdly low. The government of Puerto Rico commissioned an official report from the Milken Institute School of Public Health at George Washington University (GWU) to obtain a more accurate estimate, and with its interim publication the official toll stands at 2,975.

Why were the initial estimates so low? I read the interim GWU report to find out. The report itself is clearly written, quite detailed, and composed by an expert team of social and medical scientists, demographers, epidemiologists and biostatisticians, and I find its analysis and conclusions compelling. (Sadly however the code and data behind the analysis have not yet been released; hopefully they will become available when the final report is published.) In short:

  • In the earliest days of the hurricane, the death-recording office was closed and without power, which suppressed the official count.
  • Even once death certificates were collected, it became clear that officials throughout Puerto Rico had not been trained on how to record deaths in the event of a natural disaster, and most deaths were not attributed correctly in official records.

Given these deficiencies in the usual data used to calculate death tolls (death certificates), the GWU team used a different approach to calculate the death toll. The basis of the method was to estimate excess mortality, in other words, how many deaths occurred in the post-Maria period compared to the number of deaths that would have been expected if the hurricane had never happened. This calculation required two quantitative studies:

  • An estimate of what the population would have been if the hurricane hadn't happened. This was based on a GLM model of monthly data from the prior years, accounting for factors including recorded population, normal emigration and mortality rates.
  • The total number of deaths in the post-Maria period, based on death certificates from the Puerto Rico government (irrespective of how the cause of death was coded).
  • (A third study examined the communication protocols before, during and after the disaster. This study did not affect the quantitative conclusions, but formed the basis of some of the report's recommendations.)

The difference between the actual mortality, and the estimated "normal" mortality formed the basis for the estimate of excess deaths attributed to the hurricane. You can see those estimates of excess deaths one month, three months, and five months after the event in the table below; the last column represents the current official estimate.

[Table: excess mortality estimates at one, three, and five months after Hurricane Maria]
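
The arithmetic behind that estimate is simple once the counterfactual model is in hand; as a toy sketch (the numbers below are placeholders, not figures from the GWU report):

// excess deaths = observed deaths minus the deaths the "no hurricane" model predicts
function excessDeaths(observed: number[], expected: number[]): number {
    return observed.reduce((sum, obs, i) => sum + (obs - expected[i]), 0);
}

const observedMonthly = [2900, 2700, 2600]; // hypothetical post-Maria monthly deaths
const expectedMonthly = [2300, 2200, 2250]; // hypothetical model-predicted deaths
excessDeaths(observedMonthly, expectedMonthly); // 1450 (illustrative only)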

These results are consistent in scale with another earlier study by Nishant Kishore et al. (The data and R code behind this study are available on GitHub.) This study attempted to quantify deaths attributed to the hurricane directly, by visiting 3299 randomly chosen households across Puerto Rico. At each household, inhabitants were asked about any household members who had died and their cause of death (related to or unrelated to the hurricane), and whether anyone had left Puerto Rico because of the hurricane. From this survey, the paper's authors extrapolated the number of hurricane-related deaths to the entire island. The headline estimate of 4,625 at three months is somewhat larger than the middle column of the study above, but due to the small number of recorded deaths in the survey sample the 95% confidence interval is also much larger: 793 to 8498 excess deaths. (Gelman's blog has some good discussion of this earlier study, including some commentary from the authors.)

With two independent studies reporting excess deaths well into the thousands attributable directly to Hurricane Maria, it's a fair question to ask whether a more effective response before and after the storm could have reduced the scale of this human tragedy.

Milken Institute School of Public Health: Study to Estimate the Excess Deaths from Hurricane Maria in Puerto Rico


Announcing new REST API’s for Process Customization

Last sprint we released a new set of REST API endpoints for process customization. In version 4.1 there are three sets of REST APIs: two for the inherited model and one for the Hosted XML model. This created some confusion about which endpoints to use and when. In the new 5.0 (preview) version we combined... Read More

Top Stories from the Microsoft DevOps Community – 2018.09.14

Wow, y’all: seven years ago today, at the BUILD conference, we announced the preview of what we called “Team Foundation Service”. That service offering became Visual Studio Team Services. And on Monday, we announced the newest evolution of what that vision has become: Azure DevOps. Azure DevOps is a family of tools to help you... Read More

Azure.Source – Volume 49


Welcome to Azure DevOps

The big news last week was the introduction of Azure DevOps, which represents the evolution of Visual Studio Team Services (VSTS). Azure DevOps is a set of five new services that can be used together as a unified product, independently as stand-alone services, or in any combination: Azure Pipelines, Azure Boards, Azure Repos, Azure Test Plans, and Azure Artifacts. Azure Pipelines is a fully-managed CI/CD service that enables developers to continuously build, test, and deploy any type of app to any platform or cloud; it is available with free, unlimited CI/CD minutes for open source projects and is integrated with the GitHub Marketplace. In partnership with GitHub, we built an extension for Visual Studio Code that gives developers the ability to review GitHub pull request source code from within the editor.

 

Azure DevOps & Azure Pipelines Launch Keynote - Learn all about our announcement from hosts Jamie Cool, Donovan Brown and guests who will cover what's new in Azure DevOps, Azure Pipelines, our GitHub CI integration and much more. Watch more content here: aka.ms/AzureDevOpsLaunch.

Introducing Azure DevOps - Announcement blog post from Jamie Cool, Director of Program Management, Azure DevOps that provides a high-level overview of what Azure DevOps is, briefly covers how Open Source projects receive free CI/CD with Azure Pipelines, and outlines the evolution from Visual Studio Team Services.

Announcing Azure Pipelines with unlimited CI/CD minutes for open source - Azure Pipelines is a CI/CD service that enables you to continuously build, test, and deploy to any platform or cloud. Azure Pipelines also provides unlimited CI/CD minutes and 10 parallel jobs to every open source project for free. Use the Azure Pipelines app in the GitHub Marketplace to make it easy to get started.


Deep dive into Azure Boards - Azure Boards is a service for managing the work for your software projects. Teams need tools that flex and grow. Azure Boards does just that, bringing you a rich set of capabilities including native support for Scrum and Kanban, customizable dashboards, and integrated reporting. In this post, Aaron Bjork, Principal Group Program Manager, Azure DevOps, goes through a few core features in Azure Boards and gives some insight into how you can make them work for your teams and projects.

Learn more about Azure DevOps:

Now generally available

Video Indexer – General availability and beyond - At the International Broadcasters Conference (IBC) Show 2018, we announced the general availability of Video Indexer, which is a cloud application built on Azure Media Analytics, Azure Search, Cognitive Services (such as the Face API, Microsoft Translator, the Computer Vision API, and Custom Speech Service). It enables you to extract the insights from your videos using Video Indexer's cross-channel (audio, speech, and visual) machine learning models, such as emotion detection and topic inferencing. We also released a new Video Indexer v2 connector for Logic Apps and Flow, which enables you to set up your own custom Video Indexer workflows to further automate the process of extracting deep insights from your videos quickly and easily without writing code.

The Azure Podcast

The Azure Podcast | Episode 246 - South Central US outage discussion - A discussion of the outage that impacted Azure services and customers. Kendall, Evan, and Sujit break down the outage and try to understand how Microsoft and its customers can be better prepared for such unplanned events.

News and updates

Application Insights improvements for Java and Node.js - Get an overview of recent improvements in Azure Monitor to enable a first-class monitoring experience for Java and Node.js teams in both their Azure and on-premises environments. Note that all of Application Insights SDKs are open source, including Java and Node.js.

HDInsight Tools for VSCode: Integrations with Azure Account and HDInsight Explorer - HDInsight Tools for VSCode extension now integrates with the Azure Account extension, which makes your Azure HDInsight sign-in experience even easier. This release also introduces a graphical tree view for the HDInsight Explorer within Visual Studio Code. HDInsight Explorer enables you to navigate HDInsight Hive and Spark clusters across subscriptions and tenants, browse Azure Data Lake Storage and Blob Storage connected to these HDInsight clusters, and inspect your Hive metadata database and table schema.

Announcing the New Auto Healing Experience in App Service Diagnostics - App Service Diagnostics helps you diagnose and solve issues with your web app by following recommended troubleshooting and next steps. You may be able to resolve unexpected behaviors temporarily with some simple mitigation steps, such as restarting the process or starting another executable, or require additional data collection, so that you can better troubleshoot the ongoing issue at a later time. Using the new Auto Healing tile shortcut under Diagnostic Tools in App Service Diagnostics, you can set up custom mitigation actions to run when certain conditions are met.

New Price Drops for App Service on Linux - We’re extending the preview price (for Linux on App Service Environment, which is the Linux Isolated App Service Plan SKU) for a limited time through GA. App Service on Linux is a fully managed platform that enables you to build, deploy, and globally scale your apps more quickly. You can bring your code to App Service on Linux and take advantage of the built-in images for popular supported language stacks, such as Node, Java, PHP, etc., or bring your Docker container to easily deploy to Web App for Containers. We'll provide a 30-day notice before this offer ends, which is TBD.

Azure Friday

Azure Friday | Azure State Configuration experience - Michael Greene joins Scott Hanselman to discuss a new set of experiences for Configuration Management in Azure, and how anyone new to modern management can discover and learn new process more quickly than before.

Azure Friday | Unlock petabyte-scale datasets in Azure with aggregations in Power BI - Christian Wade joins Scott Hanselman to show you how to unlock petabyte-scale datasets in Azure with a way that was not previously possible. Learn how to use the aggregations feature in Power BI to enable interactive analysis over big data.

Technical content

Azure preparedness for weather events - Learn how we’re preparing for and actively monitoring Azure infrastructure in regions impacted by Hurricane Florence and Typhoon Manghkhut. As a best practice, all customers should consider their disaster recovery plans and all mission-critical applications should be taking advantage of geo-replication. You can reach our handle @AzureSupport on Twitter, we are online 24/7. Any business impact to customers will be communicated through Azure Service Health in Azure portal.

GPUs vs CPUs for deployment of deep learning models - Get a detailed comparison of the deployments of various deep learning models that highlights the striking differences in the throughput performance of GPU versus CPU deployments and provides evidence that, at least in the scenarios tested, GPUs provide better throughput and stability at a lower cost. For standard machine learning models, where the number of parameters is not as high as in deep learning models, CPUs should still be considered more effective and cost efficient. For deep learning inference tasks that use models with a high number of parameters, GPU-based deployments benefit from the lack of resource contention and provide significantly higher throughput values compared to a CPU cluster of similar cost.

How to extract building footprints from satellite images using deep learning - This post from Siyu Yang, Data Scientist, AI for Earth, highlights a sample project that uses Azure infrastructure for training a deep learning model to gain insight from geospatial data. Such tools will finally enable us to accurately monitor and measure the impact of our solutions to problems such as deforestation and human-wildlife conflict, helping us to invest in the most effective conservation efforts. If you deal with geospatial data, did you know that Azure already offers a Geo Artificial Intelligence Data Science Virtual Machine (Geo-DSVM), equipped with ESRI's ArcGIS Pro Geographic Information System? Get started with a tutorial on how to use the Geo-DSVM for training deep learning models and integrating them with ArcGIS Pro.

Diagram showing the combination of geospatial data and AI at scale to deliver an intelligent geospatial data application

How Security Center and Log Analytics can be used for Threat Hunting - Azure Security Center (ASC) uses advanced analytics and global threat intelligence to detect malicious threats, and the new capabilities that our product team is adding every day empower our customers to respond quickly to these threats. No security tool can detect 100 percent of attacks, and many of the tools that raise alerts are optimized for low false positive rates. In this post, learn how to adopt a threat-hunting mindset by proactively and iteratively searching through your varied log data with the goal of detecting threats that evade existing security solutions. Azure Security Center has built-in features that you can use to launch your investigations and hunting campaigns, in addition to responding to alerts that it triggers.

Five habits of highly effective Azure users - Based on customer interactions, we're compiling a list of routine activities that can help you get the most out of Azure, including staying on top of proven practice recommendations, staying in control of your resources on the go, staying informed during issues and maintenance, and staying up-to-date with the latest announcements. Read this post to learn more about these activities. In addition, staying engaged with your peers to share good habits they’ve discovered and learn new ones from the community is also valuable.

Additional technical content

Azure tips & tricks

Screenshot from How to deploy an Azure Web App using only the CLI tool video

How to deploy an Azure Web App using only the CLI tool - Learn how to successfully deploy an Azure Web App by using only the command-line interface (CLI) tool. Watch to learn how the Azure portal is not only helpful for working with resources, but is also convenient for using a command line to deploy web applications.
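The video walks through the cross-platform az CLI; for readers who prefer the AzureRM PowerShell module used later in this digest, a roughly equivalent sketch (the resource group, plan, and app names below are placeholders, not taken from the video) looks like this:

# Sketch: create a resource group, an App Service plan, and a web app with the AzureRM module.
Connect-AzureRmAccount
New-AzureRmResourceGroup -Name "myRg" -Location "West US"
New-AzureRmAppServicePlan -Name "myPlan" -ResourceGroupName "myRg" -Location "West US" -Tier "Free"
New-AzureRmWebApp -Name "my-unique-webapp" -ResourceGroupName "myRg" -Location "West US" -AppServicePlan "myPlan"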

Screenshot from How to work with files in Azure App Service video

How to work with files in Azure App Service - Learn how to work with files that you’ve uploaded to Azure App Service. Watch to find out what the different options are for interacting with the file system and your deployed applications in the Azure portal.
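The video focuses on the options available in the Azure portal; if you also want a scriptable way to browse the same files, one option not covered in the video is the Kudu VFS REST API on the app's .scm.azurewebsites.net site, authenticated with the app's deployment credentials. The app name and credentials below are placeholders:

# Hypothetical sketch: list files under wwwroot through the Kudu VFS REST API.
$user = '$my-unique-webapp'            # deployment user; app-level credential names start with $
$pass = '<deployment-password>'
$auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("${user}:${pass}"))

Invoke-RestMethod -Uri "https://my-unique-webapp.scm.azurewebsites.net/api/vfs/site/wwwroot/" `
                  -Headers @{ Authorization = "Basic $auth" } |
    Select-Object name, size, mtime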

Events

Microsoft Ignite 2018 - If you're not able to join us next week for this premier event, be sure to watch the live stream online from Orlando.

Microsoft Azure Media Services and our partners Welkom you to IBC 2018 - International Broadcasters Conference (IBC) Show 2018 took place last week in Amsterdam. In this post, Sudheer Sirivara, General Manager, Azure Media, covers the announcement that Video Indexer is generally available, how we partnered to showcase "live production in the cloud," how our partners are innovating to deliver a broad ecosystem of media solutions, and that Microsoft has joined the Secure Reliable Transport (SRT) Alliance.

From Microsoft Azure to everyone attending IBC Show 2018 – Welkom in Amsterdam! - In this post, Tad Brockway, General Manager, Azure Storage & Azure Stack, shares new ways that Azure, together with solutions from our partners, can address common media workflow challenges.

The IoT Show

Internet of Things Show | Join IoT in Action to Build Transformational IoT Solutions - Gain actionable insights, deepen partnerships, and unlock the transformative potential of intelligent edge and intelligent cloud solutions at this year's IoT in Action event series. This event series is a chance for you to meet and collaborate with Microsoft's customers and partner ecosystem to build and deploy new IoT solutions that can be used to change the world around us.

Internet of Things Show | iotz: a new approach to IoT compile toolchains - iotz is a command-line tool that aims to simplify the compile toolchain for embedded IoT projects. Oguz Bastemur, a developer on the Azure IoT team, joins us on the IoT Show to explain and show how iotz can be used to streamline compilation of embedded projects for IoT devices.

Customers and partners

AI helps troubleshoot an intermittent SQL Database performance issue in one day - Learn how Intelligent Insights, the Azure SQL Database intelligent performance feature, helped a customer troubleshoot a hard-to-find, six-month intermittent database performance issue in a single day; how it helps an ISV operate 60,000 databases by identifying related performance issues across their database fleet; and how it helped an enterprise seamlessly identify a hard-to-troubleshoot performance degradation issue on a large-scale 35 TB database fleet.

Illustration showing the use of Intelligent Insights to identify a performance issue in large group of Azure SQL Databases

Real-time data analytics and Azure Data Lake Storage Gen2 - We are actively partnering with leading ISVs across the big data spectrum of platform providers, data movement and ETL, governance and data lifecycle management (DLM), analysis, presentation, and beyond to ensure seamless integration between Gen2 and their solutions. Learn how we're partnering with Attunity to help customers learn more about real-time analytics and data lakes, and how you can quickly move from evaluation to execution. And join us for our first joint Gen2 engineering-ISV webinar with Attunity tomorrow, Tuesday, September 18th.

Azure Marketplace new offers - Volume 19 - The Azure Marketplace is the premier destination for all your software needs – certified and optimized to run on Azure. Find, try, purchase, and provision applications & services from hundreds of leading software providers. You can also find consulting services from hundreds of leading providers. In the first half of August we published 50 new offers, including: Informix, BitDam, and consulting services from Lixar.

Industries

The Azure Security and Compliance Blueprint - HIPAA/HITRUST Health Data and AI offers a turn-key deployment of an Azure PaaS solution to demonstrate how to securely ingest, store, analyze, and interact with health data while being able to meet industry compliance requirements. The blueprint helps accelerate cloud adoption and utilization for customers with data that is regulated.

Learn more in these posts from last week:

Diagram showing operational process flow for admitting a patient

Reduce false positives, become more efficient by automating anti-money laundering detection - Learn how, without human intervention, it is difficult, almost impossible, to adapt to the rapidly evolving patterns used by money launderers or terrorists. We have many partners that address bank challenges with fraud. Among that elite group, the Behavioral Biometrics solution from BioCatch and the Onfido Identity Verification solution help automate fraud detection through frictionless detection.

Retail brands: gain a competitive advantage with modern data management - Much of the data collected by retailers goes unused. This occurs because the infrastructure within an organization is unable to make the data accessible or searchable. Learn how great data management provides a significant strategic advantage and enables brand differentiation when serving customers.

A Cloud Guru's Azure This Week

Screenshot from A Cloud Guru's Azure This Week - 14 September 2018 video

A Cloud Guru | Azure This Week - 14 September 2018 - This time on Azure This Week, Lars talks about the rebranding of Visual Studio Team Services, the major Azure data center outage in Texas, and some great new tools for Spark developers using HDInsight.

Come check out Azure Stack at Ignite 2018


We are excited to host you all at this year’s Ignite conference. The Azure Stack team has put together a list of sessions along with a pre-day event to ensure that you will enhance your skills on Microsoft’s hybrid cloud solution and get the most out of this year’s conference.

We have an agenda that is tailored for developers who use Azure Stack to develop innovative hybrid solutions using services on Azure Stack and Azure, as well as operators who are responsible for the operations, security, and resiliency of Azure Stack itself. Whether you’re a developer or an IT operator, there’s something for you.

To fully benefit from our sessions, we recommend you attend our two overview talks, “Intelligent Edge with Azure Stack” and “Azure Stack Overview and Roadmap”. If you’re looking to learn how to operate Azure Stack, we recommend you attend “The Guide to Becoming an Azure Stack Operator” to learn what it takes to get the most out of your investment. If you’re just “Getting started with Microsoft Azure Stack as a developer”, we’ve created a path for you as well. See the learning map below:

Azure Stack learning map diagram
The following table lists the details of each session.

 

Date | Time | Session | Title
Sunday, September 23, 2018 | 8:00 AM – 5:00 PM | PRE 28 | Azure Stack Pre-Day (Building and operating hybrid cloud solutions with Azure and Azure Stack)
Tuesday, September 25, 2018 | 10:45 AM – 12:00 PM | BRK2367 | Azure Stack overview and roadmap
Tuesday, September 25, 2018 | 11:55 AM – 12:15 PM | THR2057 | Building solutions for public industry vertical with Microsoft Azure Stack
Tuesday, September 25, 2018 | 12:45 PM – 1:30 PM | BRK2373 | Getting started with Microsoft Azure Stack as a developer
Tuesday, September 25, 2018 | 2:15 PM – 3:30 PM | BRK2297 | Intelligent Edge With Azure Stack
Tuesday, September 25, 2018 | 4:00 PM – 4:20 PM | THR2058 | What you need to know to run Microsoft Azure Stack as a CSP
Wednesday, September 26, 2018 | 9:00 AM – 10:15 AM | BRK3334 | The Guide to Becoming an Azure Stack Operator
Wednesday, September 26, 2018 | 10:45 AM – 12:00 PM | BRK2374 | Understanding hybrid application patterns for Azure Stack
Wednesday, September 26, 2018 | 2:15 PM – 2:35 PM | THR3027 | Machine learning applications in Microsoft Azure Stack
Thursday, September 27, 2018 | 9:00 AM – 9:45 AM | BRK3288 | Implementing DevOps in Microsoft Azure Stack
Thursday, September 27, 2018 | 11:30 AM – 12:15 PM | BRK2305 | Discovering Security design principles and key use cases for Azure Stack
Thursday, September 27, 2018 | 12:30 PM – 1:45 PM | BRK3317 | Best Practices for Planning Azure Stack deployment and post-deployment integrations with Azure
Thursday, September 27, 2018 | 2:00 PM – 2:45 PM | BRK3335 | Architectural patterns and practices for business continuity and disaster recovery on Azure Stack
Friday, September 28, 2018 | 12:30 PM – 1:45 PM | BRK3318 | Accelerate Application Development through OpenSource Frameworks and Marketplace Items


In addition, here is a selection of Azure Stack-related sessions from our hardware partners:

  • Intel: BRK2448 – Driving business value from a modern, cloud-ready platform
  • Dell EMC: BRK2441 – Why architecture matters: A closer look at Dell EMC solutions for Microsoft WSSD, Azure Stack, and SQL Server
  • Lenovo: THR2350 – What are Lenovo and Microsoft Azure Stack customers experiencing?
  • Cisco: BRK2427 – Secure your cloud journey, from data center to the edge with Cisco
  • HPE: BRK1123 – How to tame your hybrid cloud

Finally, on the Ignite expo floor, you can find the Azure Stack team in three booths (341, 342, and 344) inside our Intelligent Edge section. Wondering what intelligent edge is? Ask anyone on our team. Many of us, along with our partners, will be in Orlando, and we look forward to meeting you.

Cheng Wei (@cheng__wei)

Principal PM Manager, Azure Stack

Jenkins Azure ACR Build plugin now in public preview


Last year at Jenkins World, we announced Jenkins on Azure support for Kubernetes. We shipped the Azure Container Agent which allows you to scale out to Azure and run a Jenkins Agent on Azure Container Instances (ACI) and/or Azure Kubernetes Service (AKS). Using the Kubernetes Continuous Deploy or Deploy to Azure Container Services (AKS) plugins, you can deploy containers to Kubernetes.

Back in April, we published a blog post on Kubernetes.io sharing with the community how to achieve Blue/Green deployment to Azure Container Services (AKS). Some questions remained to be answered, though:

  • What if I need to build a Docker image when I use ACI as my Jenkins build agent?
  • If I run Docker Build on AKS, is it secure?

Earlier this year, the Azure Container Registry team released a preview of a native container build capability called Azure Container Registry (ACR) Build, which solves just these problems. One of the best things about ACR Build is that you only pay for the compute you use to build your images.

Build from local directory

Let’s say you have an existing pipeline that uses Maven to build your Java project and then deploys to AKS:

node {
    /* … snip… */

    stage('Build') {
        sh 'mvn clean package'
        // Log in to the Azure Container Registry with credentials stored in Jenkins.
        withCredentials([usernamePassword(credentialsId: env.ACR_CRED_ID, usernameVariable: 'ACR_USER', passwordVariable: 'ACR_PASSWORD')]) {
            sh 'docker login -u $ACR_USER -p $ACR_PASSWORD http://$ACR_SERVER'
            // Build the image
            def imageWithTag = "$env.ACR_SERVER/$env.WEB_APP:$env.BUILD_NUMBER"
            def image = docker.build imageWithTag
            // Push the image to the registry
            image.push()
        }
    }
    stage('Deploy') {
        /* … snip… */
    }
}

Since ACR Build supports builds from your local directory (in this case, the build server's local directory), you can replace the Docker login, build, and push steps with a single acrQuickBuild step in your pipeline, like this:

node {
  /* … snip… */
  stage('Build') {
    sh 'mvn clean package'

    acrQuickBuild azureCredentialsId: 'principal-credentials-id',
                  resourceGroupName: env.ACR_RES_GROUP,
                  registryName: env.ACR_NAME,
                  platform: "Linux",
                  dockerfile: "Dockerfile",
                  imageNames: [[image: "$env.ACR_REGISTRY/$env.IMAGE_NAME:$env.BUILD_NUMBER"]]
  }
  stage('Deploy') {
    /* … snip… */
  }
}

The benefits

  • Apart from AKS, you can now run this build pipeline in ACI.
  • ACR Build enables network-close, multi-tenant builds, reducing network distance and improving the reliability of Docker pushes to the registry.
  • Best yet, you no longer need to get into another debate with your peers about whether it is safe to run Docker on Docker.

Build based on git commits

What if you are setting up a new pipeline and just want to trigger a build upon code commit? Fear not: ACR Build supports commit-based builds. We set up a sample Jenkinsfile that lets you build a Spring Boot web app in ACR and deploy it to AKS. In this case, once code is committed to GitHub, Jenkins triggers the build in ACR; you can then run tests (not covered in the sample) and deploy the Docker image to production. Simply follow the instructions on building a Docker image from a Git repo in ACR and then deploying to AKS using Jenkins.

We will preview the Azure ACR Build plugin at Jenkins World 2018. We will also have a few demos showing how to deploy to App Service with Tomcat and Java SE.

Please drop by the Azure Jenkins booth, see a demo, or chat with us about how you are integrating Jenkins with Azure. We are always looking for feedback and to hear more about your build systems.

Programmatically onboard and manage your subscriptions in Azure Security Center


This post was co-authored by Tiander Turpijn, Senior Program Manager.

Securing your Azure workloads has become easier with the release of Azure Security Center (ASC) official PowerShell Module!

Many organizations are looking to automate more tasks, as manual work is prone to human error and creates a potential for duplicative work. The need for automation is especially prevalent when it comes to large scale deployments that involve dozens of subscriptions with hundreds and thousands of resources – all of which must be secured from the beginning.

To streamline the security aspects of the DevOps lifecycle, ASC has recently released its official PowerShell module. This enables organizations to programmatically automate onboarding and management of their Azure resources in ASC and adding the necessary security controls.

This blog will focus on using PowerShell to onboard ASC. Future blog posts will demonstrate how you can use PowerShell to automate the management of your resources in ASC.

In this example, we will enable Security Center on a subscription with ID: d07c0080-170c-4c24-861d-9c817742786c and apply the recommended settings that provide a high level of protection, by implementing the standard tier of Security Center, which provides advanced threat protection and detection capabilities:

  1. Set the ASC pricing tier to Standard. Learn more about what ASC has to offer in its two tiers: Free and Standard.
  2. Set the Log Analytics workspace to which the Microsoft Monitoring Agent will send the data it collects on the VMs associated with the subscription (in our case, an existing user-defined workspace named myWorkspace). To learn more, read data collection in ASC.
  3. Activate Security Center’s automatic agent provisioning, which deploys the Microsoft Monitoring Agent.
  4. Set the organization’s CISO as the security contact for ASC alerts and notable events.
  5. Assign Security Center’s default security policies.

Prerequisites: these steps should be performed prior to running the Security Center cmdlets, to ensure your environment has all dependencies installed (the same commands are collected into a single script sketch after the list):

  1. Run PowerShell as administrator.
  2. Install-Module -Name PowerShellGet -Force
  3. Set-ExecutionPolicy -ExecutionPolicy AllSigned
  4. Import-Module PowerShellGet
  5. Install-Module -Name AzureRM.profile -RequiredVersion 5.5.0
  6. Restart PowerShell.
  7. Install-Module -Name AzureRM.Security -AllowPrerelease -Force
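For convenience, here are the same prerequisite commands collected into a single elevated PowerShell session; nothing new is added beyond the list above, and note the restart before the final install:

# Run from an elevated PowerShell prompt.
Install-Module -Name PowerShellGet -Force
Set-ExecutionPolicy -ExecutionPolicy AllSigned
Import-Module PowerShellGet
Install-Module -Name AzureRM.profile -RequiredVersion 5.5.0

# Restart PowerShell, then install the Security Center module.
Install-Module -Name AzureRM.Security -AllowPrerelease -Force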

a. Register your subscriptions to the Security Center Resource Provider:

Set-AzureRmContext -Subscription "d07c0080-170c-4c24-861d-9c817742786c"
Register-AzureRmResourceProvider -ProviderNamespace 'Microsoft.Security'

b. Set the coverage level (pricing tier) of the subscriptions. (This is optional. If it's not defined, the pricing tier will be Free.)

Set-AzureRmContext -Subscription "d07c0080-170c-4c24-861d-9c817742786c"
Set-AzureRmSecurityPricing -Name "default" -PricingTier "Standard"

c. Configure the workspace to which the agents will report. (This is optional. If it's not defined, the default workspace will be used.)

Prerequisite: Create a Log Analytics workspace to which the subscription's VMs will report. You can define multiple subscriptions to report to the same workspace.

Set-AzureRmSecurityWorkspaceSetting -Name "default" -Scope "/subscriptions/d07c0080-170c-4c24-861d-9c817742786c" -WorkspaceId "/subscriptions/d07c0080-170c-4c24-861d-9c817742786c/resourceGroups/myRg/providers/Microsoft.OperationalInsights/workspaces/myWorkspace"

d. Define automatic provisioning of the Microsoft Monitoring Agent on your Azure VMs. (This is optional. If it's not enabled, the agent can be installed manually.)

Set-AzureRmContext -Subscription "d07c0080-170c-4c24-861d-9c817742786c"
Set-AzureRmSecurityAutoProvisioningSetting -Name "default" -EnableAutoProvision

e. Define security contact details (optional).

It is highly recommended that you define the security contact details for the subscriptions you onboard, as these contacts will be used as the recipients of alerts and notifications generated by Security Center.

Set-AzureRmSecurityContact -Name "default1" -Email "CISO@my-org.com" -Phone "2142754038" -AlertsAdmin -NotifyOnAlert

f. Assign the default Security Center policy initiative:

Register-AzureRmResourceProvider -ProviderNamespace 'Microsoft.PolicyInsights'
$Policy = Get-AzureRmPolicySetDefinition -Name '[Preview]: Enable Monitoring in Azure Security Center'

New-AzureRmPolicyAssignment -Name 'ASC Default <d07c0080-170c-4c24-861d-9c817742786c>' -DisplayName 'Security Center Default <subscription ID>' -PolicySetDefinition $Policy -Scope '/subscriptions/d07c0080-170c-4c24-861d-9c817742786c'

You can now use the official PowerShell cmdlets with automation scripts to programmatically iterate over multiple subscriptions and resources, reducing the overhead of performing these actions manually as well as the risk of human error. For more information, refer to the ASC sample script.
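The post points to the ASC sample script for full automation; as a minimal sketch using only the cmdlets shown above (the subscription IDs below are placeholders), a loop over several subscriptions might look like this:

# Hypothetical sketch: apply the same Security Center settings to several subscriptions.
$subscriptions = @(
    "00000000-0000-0000-0000-000000000001",
    "00000000-0000-0000-0000-000000000002"
)

foreach ($sub in $subscriptions) {
    Set-AzureRmContext -Subscription $sub
    Register-AzureRmResourceProvider -ProviderNamespace 'Microsoft.Security'

    # Standard pricing tier and automatic agent provisioning (steps b and d above).
    Set-AzureRmSecurityPricing -Name "default" -PricingTier "Standard"
    Set-AzureRmSecurityAutoProvisioningSetting -Name "default" -EnableAutoProvision

    # Security contact details (step e above).
    Set-AzureRmSecurityContact -Name "default1" -Email "CISO@my-org.com" -Phone "2142754038" -AlertsAdmin -NotifyOnAlert
}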

To learn more about how you can use PowerShell to automate onboarding to Security Center, visit our documentation.

HDInsight tools for Visual Studio Code: simplifying cluster and Spark job configuration management


We are happy to announce that HDInsight Tools for Visual Studio Code (VS Code) now leverage VS Code's built-in user settings and workspace settings to manage HDInsight clusters and Spark job submissions. With this feature, you can manage your linked clusters and set your preferred Azure environment with VS Code user settings. You can also set your default cluster and manage your job submission configurations via VS Code workspace settings.

HDInsight Tools for VS Code can access HDInsight clusters in Azure regions worldwide. To grant more flexible access to HDInsight clusters, you can access the clusters through your Azure subscriptions, by linking to your HDInsight cluster using your Ambari username and password, or by connecting to an HDInsight Enterprise Security Package cluster via the domain name and password. All Azure settings and linked HDInsight clusters are kept in VS Code user settings for your future use. Spark job submission supports up to a hundred parameters, giving you the flexibility to maximize cluster computing resource usage and to specify the right parameters to optimize your Spark job performance. By leveraging the VS Code workspace settings, you have the flexibility to specify these parameters in JSON format.

Summary of new features

• Leverage VS Code user settings to manage your clusters and environments.
  • Set Azure Environment: Choose the command HDInsight: Set Azure Environment. The specified Azure environment will be your default Azure environment for cluster navigation, data queries, and job submissions.
  • Link a Cluster: Choose the command HDInsight: Link a Cluster. The linked cluster information is saved in user settings.
• Use the VS Code workspace settings to manage your PySpark job submission.
  • Set Default Cluster: Choose the command HDInsight: Set Default Cluster. The specified cluster will be your default cluster for PySpark or Hive data queries and job submissions.
  • Set Configurations: Choose the command HDInsight: Set Configurations to specify parameter values for your Spark job Livy configurations.

How to install or update

First, install Visual Studio Code and download Mono 4.2.x (for Linux and Mac). Then get the latest HDInsight Tools by going to the VS Code Extension repository or the VS Code Marketplace and searching for HDInsight Tools for VSCode.

For more information about HDInsight Tools for VSCode, please use the following resources:

Learn more about today's announcements on the Big Data blog. Discover more on the Azure service updates page.

If you have questions, feedback, comments, or bug reports, please use the comments below or send a note to hdivstool@microsoft.com.


Run Ubuntu virtual machines made even easier with Hyper-V Quick Create


Today, we've made running Linux even easier on Windows 10. With the Hyper-V Quick Create feature added in the Windows 10 Fall Creators Update, we have partnered with Ubuntu to add a virtual machine image, so in a few quick minutes you'll be up and developing. This is available now – just type "Hyper-V Quick Create" in your Start menu!

The Hyper-V Quick Create option in your Start menu.

Please note that this feature requires Hyper-V. Head over to the docs to learn more about Hyper-V and how to enable it.
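The docs walk through enabling Hyper-V in detail; as a quick sketch, on Windows 10 Pro or Enterprise the feature can typically be turned on from an elevated PowerShell prompt (a restart is required afterwards):

# Enable the Hyper-V feature set; restart when prompted.
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All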

On top of running Ubuntu in a virtual machine, you can use the Windows Subsystem for Linux (WSL). WSL is a Windows 10 feature that enables you to run native Linux command-line tools directly on Windows. It's an extremely easy feature to install on Windows 10, and you can run Ubuntu, SUSE, Debian, and other distros as well. And if you want to build your own distro and use that, you can too!
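WSL is likewise a Windows optional feature; a typical way to enable it before installing a distro from the Microsoft Store is, again from an elevated PowerShell prompt:

# Enable the Windows Subsystem for Linux feature; restart, then install a distro from the Store.
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux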

The post Run Ubuntu virtual machines made even easier with Hyper-V Quick Create appeared first on Windows Developer Blog.

                        Telstra empowers its employees to do their best work from anywhere with Microsoft Office 365


                        The Telstra logo.

                        Today’s post was written by Gregory Koteras, general manager of digital workplace solutions at Telstra in Melbourne, Australia.

Image of Gregory Koteras, general manager of digital workplace solutions at Telstra in Melbourne, Australia.

At Telstra, our mission is to connect people. We're Australia's leading telecommunications and technology company, providing mobile phone and internet access to 17.6 million retail customers.

We are fundamentally re-engineering how we operate through our new T22 strategy, which is designed to remove complexity and management layers, decrease the focus on hierarchical decision making, and increase the focus on empowered teams making decisions closer to the customer.

The strategy leverages the significant capabilities already being built through Telstra's strategic investment of up to $3 billion, announced in August 2016, in creating the Networks for the Future and digitizing the business.

                        The key to any successful organizational change is having engaged and empowered people. One of the ways we’re doing this is by providing new tools and systems that our employees can use to connect across more than 20 countries around the world. This includes outfitting our employees and contractors with Microsoft Office 365 to provide state-of-the-art collaboration and conferencing tools needed to design better services and transform our customers’ experience.

                        We also know how important it is to give our people a voice, and we use Yammer to let all employees connect with each other, ask questions, and get the answers they need. Conversely, Telstra executives use Yammer to engage with our global staff and rally support for corporate initiatives. Yammer is our corporate living room. There are thousands of work-related conversations happening there, but also book club groups, fitness groups, Brilliant Connected Women groups, and technical interest groups.

                        We’re also proud to be a corporate leader in serving customers with disabilities and addressing barriers to accessibility and inclusion. And that extends to our people. With the built-in accessibility features in Office 365 ProPlus, such as screen reader support, voice alerts, and keyboard shortcuts, all Telstra employees can use these new tools to be part of company conversations.

                        In March 2014, Telstra adopted a flexible workstyle model called All Roles Flex, which recognizes the need for flexible hours and modes for different job roles. It includes part-time work, working outside normal nine-to-five business hours, and working from different locations. To support this way of working, our people need to have access to the best tools and services, so they can connect anywhere, anytime. Office 365 gives them the flexibility and functionality to do that.

                        As we focus on transforming our company, the tools we provide our people will play a critical role. By greatly simplifying our structure and ways of working, we empower our people and better serve our customers.

                        Read the case study to learn how Telstra is creating a simpler and productive workplace with Microsoft Office 365.

                        The post Telstra empowers its employees to do their best work from anywhere with Microsoft Office 365 appeared first on Microsoft 365 Blog.

                        The future of ASP.NET SignalR


                        In ASP.NET Core 2.1, we brought SignalR into the ASP.NET Core family. Many of our users have asked what this means for the previous version of SignalR: ASP.NET SignalR.

                        As a reminder, ASP.NET SignalR is represented by the NuGet package Microsoft.AspNet.SignalR and runs on applications using .NET Framework and System.Web. ASP.NET Core SignalR is part of the ASP.NET Core platform which runs on both .NET Core and .NET Framework and uses the NuGet package Microsoft.AspNetCore.App.

                        Support for the Azure SignalR Service

                        This year, we’re planning to release version 2.4.0 of ASP.NET SignalR. This version will contain bug fixes as always, but will also include one major new feature: Support for the Azure SignalR Service.

                        The Azure SignalR Service is a managed service that handles scaling for SignalR-based applications. In May, we released this service into public preview with support for ASP.NET Core SignalR, and we’re pleased to announce that support for ASP.NET SignalR is also coming to the Azure SignalR Service. We expect the first preview of this to be available in Fall 2018, along with a preview release of SignalR 2.4.0. In order to migrate your application to use the Azure SignalR service, you will need to update both the client and the server to ASP.NET SignalR 2.4.0.

                        We’re still in the early stages of work on this feature, so we don’t have specific examples yet. However, this support will be similar to how ASP.NET Core SignalR supports the Azure SignalR Service. With minimal modifications to your server application, you will be able to enable support for the service. Once you’re using the Azure SignalR Service, your server application no longer has to manage all of the individual connections. Moving to the service also means you no longer require a scale-out system (such as Redis, Service Bus or SQL Server), as the service handles scaling for you.

                        Also, as with ASP.NET Core SignalR, as long as your clients are using the latest version of the SignalR Client (2.4.0), they will be able to connect via the service without modification.

                        If you’re interested in working with the Azure SignalR Service to migrate to this service and provide feedback, please contact the team by email at asrs@microsoft.com.

                        Client supported frameworks

                        Another thing we are planning to do in 2.4.0 is simplify the supported frameworks for the SignalR .NET Client. Our plan for 2.4.0 is to move the client to support:

                        • .NET Standard 2.0 (which includes Xamarin, and the Universal Windows Platform)
                        • .NET Framework 4.5 and higher

                        This does mean that version 2.4.0 of the .NET client will no longer support Windows 8, Windows Phone, or Silverlight applications. We want some feedback on this though, so if you still have applications using the SignalR client that run on these platforms, let us know by commenting on the GitHub issue tracking this part of the work.

                        Handling the backlog of issues

                        While we didn’t forget about ASP.NET SignalR, we did let things get a little messy over there while we were building ASP.NET Core. If you look at the issue tracker you’ll see we have (at the time of publishing) over 500 open issues. We didn’t do a great job keeping on top of the backlog there. In order to get a handle on things, we’re going to have to declare a kind of “Issue Bankruptcy” to get back on top of the work.

                        So, in order to get back on top of the backlog, we’re going to close all issues that were opened prior to January 1st, 2018. This does not mean we’re not interested in fixing them, it just means they’ve gotten stale and we aren’t able to follow up on them. If one of these issues is affecting you, please feel free to open a new issue with the details and we will be happy to review it. There are a lot of issues in the tracker that are simply no longer relevant, or may have been addressed in a previous release.

                        Our priorities moving forward

                        With the release of ASP.NET Core SignalR, it’s necessary to talk about how the team is going to be prioritizing our work on the two variants of SignalR. Our plan moving forward is to shift our focus towards ASP.NET Core SignalR. This means that after the 2.4.0 release, we won’t be investing resources in new features for ASP.NET SignalR. However, this does not mean ASP.NET SignalR will be left unsupported. We will continue to respond to bug reports and fix critical or security-related issues. You should continue to feel confident using ASP.NET SignalR in your applications. Microsoft will also continue to provide product support services for ASP.NET SignalR.

                        Collaborating with customers and partners to deliver a modern desktop: Microsoft Managed Desktop


We have consistently heard from our customers—both large and small—that they struggle to keep up with the pace of changes in technology. They feel pulled between the requirement to stay secure and up to date and the need to drive more business value. They are challenged to deliver the great user experiences that employees want and expect. And the sophistication of today's security threats requires organizations to re-think how they deploy, manage, and secure assets for their users.

                        The cloud has dramatically changed the way in which we can deliver, manage, and update devices, which creates an opportunity to think about how we deliver a modern desktop with Microsoft 365 in new and different ways.

                        Today, we are announcing Microsoft Managed Desktop (MMD), a new initial offering that brings together Microsoft 365 Enterprise, device as a service, and cloud-based device management by Microsoft. MMD enables customers to maximize their IT organizations’ focus on their business while Microsoft manages their modern desktops.

                        Great experience with Microsoft 365 on modern devices—Our goal with MMD is to provide a great experience for users while keeping devices secure and up to date. MMD relies on the power of Microsoft 365, running in a consistent, lightweight, reference architecture that continues to evolve to allow our customers to take full advantage of our intelligent security capabilities to protect them from nascent threats. Importantly, MMD is built on modern devices that meet our specification and runtime quality bar.

                        Analytics benefit all customers—Analytics are at the heart of MMD. We leverage analytics to provide operational and security insights and learnings, so we can constantly monitor and improve, as well as enable us to manage the global MMD device population. As an example, we use insights and AI to determine which devices are ready for feature updates or, conversely, whether a specific app is blocking a device’s ability to update so we can act.

                        Customer and partner insight and feedback—Customer feedback and insight are also at the heart of MMD. We have deployed MMD in a measured approach with a set of early customers, leading to hundreds of changes in Microsoft 365 to better enable end-to-end scenarios for customers around the world. We are delighted to be working in partnership with Lloyds Banking Group to deploy MMD, as well as the Seattle Reign FC. These organizations are united by their desire to transform, to modernize the user experience and shift to a modern desktop. They have been partnering with us to learn from, expand, and develop the MMD offering so that we can bring it to more customers and markets in the future. We are also partnering with key strategic partners like Dell, HP, DXC, HCL, Computacenter, and Accenture/Avanade in our MMD journey. We see great opportunities for our partner ecosystem to expand their existing Microsoft 365 activities and provide devices and experiences alongside MMD.

                        Today, we are live with MMD with a small number of customers in the U.K. and the U.S., and are starting operations in Canada, Australia, and New Zealand in early 2019. We will continue to learn from these initial customers and use that insight to evolve and improve both Microsoft 365 and MMD. From there we plan to expand to several other geographies in the second half of 2019.

                        We believe that MMD will be an option that allows organizations to fundamentally shift how they think about and manage their IT. Through MMD, customers will be able to move toward a secure, always up-to-date environment with device management by Microsoft. As we expand the offering, our partners will play a key role in helping us bring MMD to market and support customers in their transition to a modern desktop. We encourage customers who are interested in MMD to contact their local Microsoft account manager as we work to broaden the offering.

                        The post Collaborating with customers and partners to deliver a modern desktop: Microsoft Managed Desktop appeared first on Microsoft 365 Blog.

                        ASP.NET Core in Visual Studio for Mac – Help us build the best experience


We are working to improve the experience for ASP.NET Core developers in Visual Studio for Mac. If you are working on ASP.NET Core apps in Visual Studio for Mac, we would love to hear your feedback. Your feedback is important so that we can help shape the future of ASP.NET Core in Visual Studio for Mac.

At the end of the survey, you can leave your name and email address (optional) so that a member of the team can reach out to you for more details. The survey should take less than 5 minutes to complete.

                        Take the survey now

                        Thanks,
                        Sayed Ibrahim Hashimi
                        @SayedIHashimi
