
Microsoft Azure portal March 2019 update


This month’s updates include an improved “All services” view, Virtual Network Gateway overview updates, an improved DNS Zone and Load Balancer creation experience, Management Group integration into Activity Log, redesigned overview screens for certain services within Azure DB, an improved creation experience for Azure SQL Database, multiple changes to the Security Center, and more updates to Intune.

Sign in to the Azure portal now and see for yourself everything that’s new. Download the Azure mobile app to stay connected to your Azure resources anytime, anywhere.

Here’s the list of March updates to the Azure portal:

Shell

IaaS

Management experiences

SQL

Azure Security Center

Other

Shell

Improved “All services” view

We have improved the “All services” view, the view that shows all available services and resources in Azure:

  • The entire screen’s real estate is now utilized to show more services.
  • A category index has been added at the left to help navigate the Azure offering.

Screenshot of the All services view

IaaS

Virtual network gateway overview updates

We've made significant updates to the overview page for virtual network gateways. We've added shortcut tiles in the center of the page to make it easier to find troubleshooting tools, and we've added a tile that brings up documentation so you can quickly learn more about your resource. We've also added metric charts so you can see at a glance what the tunnel ingress and egress are for your gateway.

Screenshot of a Virtual Network Gateway overview

Go to any virtual network gateway resource to try out the changes.

Improved creation experience for DNS Zones and Load Balancer

We are continuing our efforts to bring improved and consistent instance creation experiences for our top-level resources. As part of that effort, we’ve just updated DNS Zones and Load Balancer. The updated flow eliminates horizontal scrolling during the creation workflow and follows the same UI patterns that we use in other popular services such as Virtual Machines, Storage, Cosmos DB, and Azure Kubernetes Service, resulting in customer experiences that are easier to learn and more consistent.

Screenshot of DNS Zones FSC in the Azure portal

Screenshot of Load Balancer FSC in the Azure portal

  1. Bring up either the DNS zone or Load Balancer resource in the Azure portal
  2. Select Add to launch the new create experience

Management experiences

Management Group integration into Activity Log

Azure Management Groups provide a level of scope above subscriptions and are being adopted across the Azure portal.  Users have been asking to view Management Group events in the Activity Log, and now, integration of Management Group events and filtering into the Activity Log allows users to audit their Management Groups.  An authorized user of a Management Group can go to the Activity Log and see all actions that have happened on a Management Group, such as create, edit, delete, and parent change. In addition, you can now audit Policy Assignments on Management Groups.

Screenshot of access to Management Groups and the Activity Log

  1. If you have access to Management Groups in your current tenant, simply navigate to the Activity Log.
  2. Select what Management Group you want to filter by using the first pill on the left.

SQL

Redesigned overview blades for Azure Database for MySQL, PostgreSQL, and MariaDB services

We have redesigned the overview blade for MySQL, PostgreSQL, and MariaDB, which provides an at-a-glance understanding of the status of your server. It is also aligned with the overview design of Azure SQL Database, Elastic Pools, Managed Instance, and Data Warehouse. In the overview, you can now see the resource usage over the last hour, common tasks, features available, and whether the features have been configured. Clicking on any of these tiles in the overview takes you to the full details and settings.

Screenshot of a redesigned overview blade for Azure Database for PostgreSQL server

  1. Select All Services
  2. Search and select either Azure Database for MySQL, Azure Database for PostgreSQL, or Azure Database for MariaDB
  3. Select any server from the list
  4. Observe the overview blade

Improved creation experience for Azure SQL Database

We are continuing our efforts to bring improved and consistent creation experiences for our top-level resources. As part of that effort, we’ve just updated the SQL Database create workflow. The updated flow eliminates horizontal scrolling during creation and follows the same UI patterns that we use in other popular services like Virtual Machines, Storage, Cosmos DB, and AKS, resulting in customer experiences that are easier to learn and more consistent.

Screenshot displaying the creation of a SQL Database

    Azure Security Center

    Secure score as a dashboard KPI

    Secure score is now the main compliance KPI in the Azure Security Center dashboard, replacing the previous percentage-based compliance metric.

    New regulatory compliance dashboard

    The new Azure Security Center regulatory compliance dashboard helps streamline the process for meeting regulatory compliance requirements by providing insights into your compliance posture. The information provided is based on continuous assessments of your Azure environment.

    Updated security policies

    We are updating Azure Security Center policies to use Azure Policy. You will be migrated automatically; no action is required on your part. For more information, see our documentation, “Working with security policies.”

    Updated security recommendations

    Azure App Service security recommendations have been improved to provide greater accuracy and environment compatibility. For more information, see our documentation, “Protecting your machines and applications in Azure Security Center.”

    Other

    Updates to Microsoft Intune

    The Microsoft Intune team has made updates to Microsoft Intune. You can find them on the What's new in Microsoft Intune page.

    Did you know?

    We now have several new videos in the recently launched Azure portal “how to” video series!  This weekly series highlights specific aspects of the portal so you can be more efficient and productive while deploying your cloud workloads from the portal. Recent videos include a demonstration of how to create, share, and use dashboards, how to manage virtual machines while on the go using the Azure mobile app, and how to configure a virtual machine with the Azure portal. Keep checking in to our playlist on YouTube for a new video each week.

    Next steps

    The Azure portal’s large team of engineers always wants to hear from you, so please keep providing us with your feedback in the comments section below or on Twitter @AzurePortal.

    Don’t forget to sign in to the Azure portal and download the Azure mobile app today to see everything that’s new. See you next month!


    Maximize existing vision systems in quality assurance with Cognitive AI


    Quality assurance matters to manufacturers. The reputation and bottom line of a company can be adversely affected if defective products are released. If a defect is not detected and the flawed product is not removed early in the production process, the damage can run in the hundreds of dollars per unit. To mitigate this, many manufacturers install cameras to monitor their products as they move along the production line. But the data may not always be useful; for example, cameras alone often struggle to identify defects in high volumes of images moving at high speed. Now, a solution provider has developed a way to integrate such existing systems into quality assurance management. Mariner, with its Spyglass solution, uses AI from Azure to achieve visibility over the entire line and to prevent product defects before they become a problem.

    Quality assurance expenses

    Quality assurance (QA) management in manufacturing is time-consuming and expensive, but critical. The effects of poor quality are substantial, as they result in:

    • Re-work costs
    • Production inefficiencies
    • Wasted materials
    • Expensive and embarrassing recalls 

    And worst of all, dissatisfied customers that demand returns. 

    Multiple variables across multiple facilities

    Too many variables make product defect analysis and prediction difficult. Manufacturers need to perform a root cause analysis across a manufacturing process that has complex variables. They want to determine which combinations of variables create high-quality products versus those that create inferior products. But to achieve this precision, the manufacturer needs to aggregate data across multiple systems to return a comprehensive view.

    Legacy vision systems lack the precision of AI-based defect detection systems. Manufacturing processes can be incredibly complex, and older vision systems are often unable to consistently and accurately identify small flaws that may have a large impact on customer satisfaction. Also, false positives can bog down production schedules.

    Additionally, the inability to aggregate data from multiple production lines or factories to determine the cause of variations in quality across multiple sites prevents a holistic view of operational efficiency.

    Integrating legacy systems and AI on Azure

    Spyglass Visual Inspection, powered by Microsoft Azure, is an easily implemented, rapid time-to-value QA solution that can reduce costs associated with product defects and increase customer satisfaction. It works with images from any vision system, so companies that already have systems in place can leverage them for additional return on investment (ROI).

    By using cameras and other devices already in use on the production floor, the solution takes a lean approach to implementing new and emerging technologies like IoT, Cognitive AI, and computer vision. This ensures that manufacturers control costs and achieve value at every stage of production.

    Flowchart display of Spyglass Reference Architecture

    The figure above outlines the architecture of the solution. Data from existing systems is placed at the front. Edge computing provides on-premises processing. The data moves to storage on Azure, where it is further processed. AI can then be applied, and the results viewed using Power BI for insights into the system.

    Benefits

    Spyglass Visual Inspection harnesses the power of AI, IoT, and machine vision. The result is that manufacturers minimize defects and reduce costs through advanced analytics. For the manufacturer, the benefits that matter are:

    • Rapid ROI: Easy implementation and ramp-up enables immediate process improvements and a rapid return on your investment.
    • Greater visibility: Predictive analytics and root cause analysis drive quality improvements across multiple lines or sites.
    • Leverages existing vision systems: Extracts more value from existing industrial cameras and devices by augmenting them with AI-driven real-time insights.

    Azure services

    Spyglass Visual Inspection is powered by Microsoft Azure. It leverages the following Azure services:

    • Microsoft Deep Learning Virtual Machine: a neural network extracts rich information from images to identify defects.
    • Azure IoT Edge ingests images from industrial cameras on the production line and runs cloud AI algorithms locally.
    • Azure IoT Hub receives images, meta data from images, and results from the defect detection analysis on the Edge.
    • Azure Stream Analytics enables users to create dashboards that offer deep insights into the types and causes of defects that are occurring across a massive number of variables.
    • Azure Data Lake Storage/Blob Storage stores the data. Because heterogeneous data from multiple streams can be stored, additional data types can be added to image-based analysis.
    • Azure SQL Database is used to store the business rules that define what a good or bad product is and what alerts should be generated in the analytics.
    • Azure Functions/Service Bus generates rules that trigger alerts so you can capture the most meaningful data for business users.
    • Power BI provides interactive dashboards that make data easy to access and understand, so users can make analytics-driven decisions.
    • Power Apps creates additional applications for manufacturers to act on the data and insights they have received.

    Recommended next steps

    Go to the marketplace listing for Spyglass and select Contact me.


    .NET Core Container Images now Published to Microsoft Container Registry


    We are now publishing .NET Core container images to Microsoft Container Registry (MCR). We have also made other changes to the images we publish, described in this post.

    Important: You will need to change FROM statements in Dockerfile files and docker pull commands as a result of these changes. 3.0 references need to be changed now. Most 1.x and 2.x usages can be changed over time. The new tag scheme is described in this post, and the tags are provided at the microsoft-dotnet-core repo, our new home on Docker Hub.

    Summary of changes:

    • .NET Core images are now published to Microsoft Container Registry.
    • Updates will continue to be published to Docker Hub, for .NET Core 1.x and 2.x.
    • .NET Core 3.0 will only be published to MCR.
    • Nano Server 2016 images are no longer supported or published.

    Microsoft Container Registry (MCR)

    Microsoft teams are now publishing container images to MCR. There are two key reasons for this change:

    • We can establish MCR as the official source of Microsoft-provided container images, and then more easily promote and syndicate those images to multiple container services, like Docker Hub and Red Hat OpenShift.
    • We can use Microsoft Azure as a global content distribution network (CDN) for delivering Microsoft-provided container images from locations closer to you. This means your container image pulls will be faster and have improved reliability in many cases.

    From an architectural perspective, MCR is a globally replicated service that handles image manifest requests. It uses the Azure CDN service for image layer requests. This separation isn’t observable with docker pull, but it is easy to see when you inspect .NET Core images with curl. The use of globally replicated resources helps to demonstrate our commitment to providing a great experience for container users across the world.

    Continuing to Support Docker Hub

    We will continue to maintain Docker Hub repo pages so that you can discover and learn about .NET Core images. The Docker Hub website URLs you’ve used for Microsoft repos will continue to work, and forward to updated locations on Docker Hub.

    You will use and see MCR as the storage backend for Microsoft container images, BUT the primary way you learn about Microsoft container images and tags will be through a container hub or website, which for many users will continue to be Docker Hub.

    Existing Docker Hub images will be maintained as-is. In fact, we will continue to update the existing microsoft/dotnet repo, as described later in this post.

    .NET Core Images are on MCR

    We started publishing images to MCR in February 2019, starting with the .NET Core “nightly” repo. In early March, we moved the .NET Core repo as well.

    On Docker Hub, we had one very large repo that served up four image types for four operating system distributions and three CPU types. This broad set of tags made for very long tag names and even longer README files. We decided to take this opportunity to re-factor .NET Core into multiple repos, one for each image type. We also added a “product repo” that groups all of our repos together.

    The new repos follow:

    Note: .NET Core 2.0 is end-of-life (EOL); therefore, it will not be available on MCR, only on Docker Hub, as unsupported images. You’ll want to move to .NET Core 2.1, which is a Long-Term Support (LTS) release.

    Update .NET Core Image Tags

    The following examples show you what the new docker pull tag strings look like for .NET Core. They are shown as docker pull, but the same strings need to be used in Dockerfile files for FROM statements.

    These examples all target .NET Core 2.1, but the same pattern is used across any supported .NET Core version:

    • SDK: docker pull mcr.microsoft.com/dotnet/core/sdk:2.1
    • ASP.NET Core Runtime: docker pull mcr.microsoft.com/dotnet/core/aspnet:2.1
    • .NET Core Runtime: docker pull mcr.microsoft.com/dotnet/core/runtime:2.1
    • .NET Core Runtime Dependencies: docker pull mcr.microsoft.com/dotnet/core/runtime-deps:2.1

    The following example demonstrates what a FROM statement looks like for the new MCR repos, using the dotnet/core/sdk repo as an example:

    FROM mcr.microsoft.com/dotnet/core/sdk:2.1

    If you use Alpine, for example, the tags are easily extended to include Alpine, using the dotnet/core/runtime repo as an example:

    FROM mcr.microsoft.com/dotnet/core/runtime:2.1-alpine

    You can look at the .NET Core Docker Samples to see how the tag strings are used in practice.

    Continued Support for Docker Hub

    We’ve been publishing images to Docker Hub for three or four years. There are likely thousands (if not millions) of scripts and Dockerfiles that have been written that expect .NET container images on Docker Hub. As stated above, those artifacts will continue to work as-is.

    We publish multiple forms of tags that provide varying levels of convenience and consistency. These differences pivot on the degree to which version numbers are specified, from completely specified to not present at all. The following example tags demonstrate the various forms of tags, from least- to most-specific:

    • latest
    • 2.2-runtime
    • 2.1.6-sdk

    We will continue publishing images for the first two tag forms (version-less and two-part versions) for the supported lifetime of the associated versions. We will not publish any new three-part versions (like the last example) to Docker Hub, but only to MCR. We expect that most scripts and Dockerfile files use either of the first two forms of tags, or are manually updated to adopt three-part tags on some regular cadence. If they are manually updated, they can be manually updated to pull images from MCR, as in the example below.
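    For example, a script that today pulls the two-part runtime tag from the existing microsoft/dotnet repo on Docker Hub:

    docker pull microsoft/dotnet:2.2-runtime

    would, after a manual update, pull the equivalent image from the new MCR repo (an illustrative mapping based on the tag pattern above):

    docker pull mcr.microsoft.com/dotnet/core/runtime:2.2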

    .NET Core 3.0 Images

    The move to MCR is happening part-way through the .NET Core 3.0 release, which gave us the option of making .NET Core 3.0 MCR-only. This makes our approach for MCR for .NET Core 3.0 different from that for the other supported versions. We initially published .NET Core 3.0 Preview images to Docker Hub. Starting with .NET Core 3.0 Preview 3, .NET Core 3.0 images will only be published to MCR. It is important that .NET Core 3.0 users transition to MCR soon.

    The following are examples of .NET Core 3.0 tag strings, to help you move to MCR:

    • SDK: docker pull mcr.microsoft.com/dotnet/core/sdk:3.0
    • ASP.NET Core Runtime: docker pull mcr.microsoft.com/dotnet/core/aspnet:3.0
    • .NET Core Runtime: docker pull mcr.microsoft.com/dotnet/core/runtime:3.0
    • .NET Core Runtime Dependencies: docker pull mcr.microsoft.com/dotnet/core/runtime-deps:3.0

    The following example demonstrates what a FROM statement looks like for .NET Core 3.0 on MCR, using the dotnet/core/runtime repo as an example:

    FROM mcr.microsoft.com/dotnet/core/runtime:3.0

    .NET Core 3.0 Preview 1 and Preview 2 images will remain available on Docker Hub, for three-part version tags. For Preview 1 and Preview 2, we also published two-part version tags, like 3.0-sdk and 3.0-runtime. We were concerned that some users would see those two-part version tags for .NET Core 3.0 on Docker Hub and believe that those were supported images that would be updated in the future. They will not be. To mitigate that, we deleted the two-part version tags for 3.0 on Docker Hub. This approach enables us to clearly communicate during the preview period that everyone needs to move to MCR for 3.0 images as soon as possible. We apologize if this change negatively affected you.

    Visual Studio 2019 Previews use the two-part 3.0 tags that were deleted. Users must update their Dockerfile files to ensure that their projects build correctly. We have provided a sample Dockerfile that provides the correct FROM statements for .NET Core 3.0 ASP.NET Core projects in Visual Studio 2019.
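    As a rough illustration (not the exact sample file referenced above), a multi-stage Dockerfile for an ASP.NET Core 3.0 project using the new MCR tags might look like the following; the project name MyApp is an assumption:

    # Build stage: uses the .NET Core 3.0 SDK image from MCR
    FROM mcr.microsoft.com/dotnet/core/sdk:3.0 AS build
    WORKDIR /src
    COPY . .
    RUN dotnet publish -c Release -o /app

    # Runtime stage: uses the smaller ASP.NET Core 3.0 runtime image
    FROM mcr.microsoft.com/dotnet/core/aspnet:3.0 AS runtime
    WORKDIR /app
    COPY --from=build /app .
    # "MyApp.dll" is a placeholder for your project's published assembly
    ENTRYPOINT ["dotnet", "MyApp.dll"]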

    Nano Server 2016 Images

    Nano Server 2016 is no longer supported by the Windows team and they are no longer publishing updated container images for that version. As a result, we have stopped publishing Nano Server 2016 images to Docker Hub and MCR.

    This affects .NET Core image tags in a few different ways:

    • Manifest (AKA “multi-arch”) tags no longer include an entry for Nano Server 2016. That means that manifest tags like 2.1-sdk will no longer work on Windows Server 2016, Nano Server 2016, or Windows 10 1607. If you still need to use Nano Server 2016-based images (even though they are no longer supported), you will need to use tags that include the Windows version (these are non-manifest tags), for example mcr.microsoft.com/dotnet/core/runtime:2.1-nanoserver-sac2016.
    • .NET Core 2.x and 3.0 images are supported and available for all supported versions of Nano Server starting with version 1709. This means that the 2.x and 3.0 manifest tags can be used on Windows 10, version 1709+, and Windows Server, version 1709+. You can also use non-manifest tags for those versions.
    • We now only produce Nano Server, version 1809 images for .NET Core 1.x. Previously, we only produced Nano Server, version 2016 images for .NET Core 1.x. You would have used either a manifest tag (like 1.1-runtime or 1.1) or a nanoserver-sac2016 tag to pull those images. You can pull the new .NET Core 1.x Nano Server, version 1809 images by using either a manifest tag or a nanoserver-1809 tag. These tags are only supported on Windows 10, version 1809 and Windows Server 2019.

    .NET Core images for Nano Server 2016 are still available on Docker Hub and MCR and will not be deleted. You can continue to use them but they are not supported and will not get new updates. If you need to do this and previously used manifest tags, like 1.1-sdk, you can now use the following MCR tags (Docker Hub variants are similar):

    • 2.2.2-nanoserver-sac2016, 2.2-nanoserver-sac2016
    • 2.1.8-nanoserver-sac2016, 2.1-nanoserver-sac2016
    • 1.1.11-nanoserver-sac2016, 1.1-nanoserver-sac2016
    • 1.0.14-nanoserver-sac2016, 1.0-nanoserver-sac2016

    Please note that .NET Core 1.x will go out of support on June 27, 2019. We recommend that .NET Core 1.x users move to .NET Core 2.1.

    At DockerCon 2019

    We are sending a few team members to DockerCon 2019. Contact us at dotnet@microsoft.com if you’d like to meet up and talk about how you use .NET and Docker together. We’d love to hear about your approach and any challenges you face, or changes you’d like us to make.

    We’ve been attending DockerCon for a few years now and always enjoy the show. It is a good opportunity to learn the new ways people are using containers and also what new features are coming. As an example, we’re still waiting for official support for BuildKit. It is the feature we most want to see become part of the default feature set of Docker.

    Closing

    We are continuing to improve the experience using .NET Core container images from Microsoft. Publishing .NET Core container images to MCR will be an improvement since MCR is globally replicated.

    .NET Framework container images are not yet available on MCR. We will be moving them to MCR shortly.

    See Using .NET and Docker Together if you want to learn more about using Docker.


    Top Stories from the Microsoft DevOps Community – 2019.03.15


    I’ve been building tools for Azure DevOps for fifteen years and yes, in case you were wondering, saying that does make me feel old. But more importantly: I’m still learning new things about it that I didn’t know. That’s why I’m so happy to read all these articles every week. It’s not just about the cool things that people are doing, it’s also about the helpful tips that can make you more productive. Here are some great articles I found this week:

    How to build pinball high score with Azure DevOps
    Who’s got the high score on your pinball game? Back in my day all you had were three little letters. Panu Oksala has brought the pinball game’s scoreboard to the next level: by using Azure Boards he’s got an amazing top scores list on the Azure DevOps Dashboard.

    Use your own build container image to create containerized apps
    The Azure Pipelines build agents have a ton of tools pre-installed so that you can use them to build your application… but what if your requirements are really intense? Yuri Burger brings a container with the dependencies installed. It’s containers all the way down!

    Azure Pipelines Building GitHub Repositories By Example
    We moved from the VSTS product to the Azure DevOps family of products so that each product could shine on its own. Florian Rappl shows why this is great by using Azure Pipelines to build his GitHub repositories.

    Azure for sure…
    I’ve got a confession to make: although I’ve worked at Microsoft for a bunch of years, I’m actually a Unix guy at heart. That’s why I love our hosted macOS and Linux build agents, and I’m happy to see people like Steve Quirke use Azure Pipelines for their Mac builds.

    Find work Items in Azure DevOps: was ever operator
    If you have a lot of work items in your Azure Boards (like I do), then you know that sometimes you need to find a work item but can’t remember enough details to find it. Wouldn’t it be great if you could search for a work item based on what a field used to be? Ricci Gian Maria explains that you can.

    As always, if you’ve written an article about Azure DevOps or find some great content about DevOps on Azure then let me know! I’m @ethomson on Twitter.


    Xbox Avatar accessories for People with Diabetes! Sponsored by Nightscout and Konsole Kingz


    My Xbox user name is Glucose for a reason.

    This is a passion project of mine. You've likely seen me blog about diabetes for many many years. You may have enjoyed my diabetes hacks like lighting up my keyboard keys to show me my blood sugar, or some of the early work Ben West and I did to bridge Dexcom's cloud with the NightScout open source diabetes management system.

    Recently Xbox announced new avatars! They look amazing and the launch was great. They now have avatars in wheelchairs, ones with artificial limbs, and a wide variety of hair and skin tones. This is fantastic as it allows kids (and adults!) to be seen and be represented in their medium of choice, video games.

    I was stoked and immediately searched the store for "diabetes." No results. No pumps, sensors, emotes, needles, nothing. So I decided to fix it.

    NOW AVAILABLE: Go and buy the Nightscout Diabetes CGM avatar on the Xbox Store now!

    I called two friends - my friends at the Nightscout Foundation, dedicated to open source and open data for people with diabetes, as well as my friends at Konsole Kingz, digital avatar creators extraordinaire with over 200 items in the Xbox store from kicks to jerseys and tattoos.

    And we did it! We’ve added our first diabetes avatar top with some clever coding from Konsole Kingz. It is categorized as a top, but it gives your avatar not only a Nightscout T-shirt in your choice of colors but also a CGM (Continuous Glucose Meter) on your arm!

    Miss USA has a CGM

    For most diabetics, CGMs are the hardware implants we put in weekly to tell us our blood sugar with minimal finger sticks. They are the most outwardly obvious physical manifestation of our diabetes and we’re constantly asked about them. In 2017, Miss USA contestant Krista Ferguson made news by showing her CGM rather than hiding it. This kind of visible representation matters to kids with diabetes - it tells them (and us) that we’re OK.

    You can find the Nightscout CGM accessory in a number of ways. You can get it online at the Xbox Avatar shop, and when you’ve bought it, it’ll be in the Purchased tab of the Xbox Avatar Editor, under Closet | Tops.

    You can even edit your Xbox Avatar on Windows 10 without an Xbox! Go pick up the Xbox Avatar Editor and install it (on both your PC and Xbox if you like) and you can experiment with shirt and logo color as well.

    Consider this a beta release. We are working on improving resolution and quality, but what we really want to know is this: do you want more Diabetes Xbox Avatar accessories? Insulin pumps on your belt? An emote to check your blood sugar with a finger stick?

    Diabetes CGM on an Xbox avatar

    If this idea is a good one and is as special to you and your family (and the gamers in your life with diabetes), please SHARE it. Share it on social media, tell your friends, and spread the news. Profits from this avatar item will go to the Nightscout Foundation!





    Join Microsoft at the NVIDIA GPU Technology Conference


    The world of computing goes deep and wide in working on issues related to our environment, economy, energy, and public health systems. These needs require modern, advanced solutions that have traditionally been limited to a few organizations, are hard to scale, and take a long time to deliver. Microsoft Azure delivers High Performance Computing (HPC) capability and tools, integrated into a global-scale cloud platform, to power solutions that address these challenges.

    Whether it’s a manufacturer running advanced simulations, an energy company optimizing drilling through real-time well monitoring, or a financial services company using AI to navigate market risk, Microsoft’s partnership with NVIDIA makes access to NVIDIA GPUs easier than ever.

    Join us in San Jose next week at NVIDIA’s GPU Technology Conference to learn how Azure customers combine the flexibility and elasticity of the cloud with the capability of NVIDIA GPUs. We will share examples of work we’ve done in oil & gas, automotive, artificial intelligence, and much more. Also, be on the lookout for new and exciting integrations between Azure AI and NVIDIA that bring GPU acceleration to more developers.

    Microsoft sessions at the conference include:

    If you are participating in any of the many NVIDIA DLI training classes, you will get a chance to experience firsthand the breadth of Azure GPU compute options through the interactive classes which are now Azure GPU powered.

    Please come by and say “hello” at our Microsoft booth (1122), where Microsoft and partners (including Teradici and Workspot) will have demos of customer use cases, and we will have experts on hand to talk about how Azure is the cloud for any GPU workload.  Additionally, we will be demoing Microsoft Bing, which uses the power of NVIDIA GPUs on Azure to execute a variety of tasks such as generating instant answers to complex questions and analyzing images to help you find similar-looking items or products.

    As you can see, NVIDIA GPUs are a key part of the Microsoft High Performance Computing strategy that Azure customers rely on to drive innovation.

    We’re looking forward to talking to you next week.

    Microsoft Azure Government is First Commercial Cloud to Achieve DoD Impact Level 5 Provisional Authorization, General Availability of DoD Regions


    Furthering our commitment to be the most trusted cloud for Government, today Microsoft is proud to announce two milestone achievements in support of the US Department of Defense.

    Information Impact Level 5 DoD Provisional Authorization by the Defense Information Systems Agency

    Azure Government is the first commercial cloud service to be awarded an Information Impact Level 5 DoD Provisional Authorization by the Defense Information Systems Agency. This provisional authorization allows all US Department of Defense (DoD) customers to leverage Azure Government for the most sensitive controlled unclassified information (CUI), including CUI of National Security Systems. 

    DoD Authorizing Officials can use this Provisional Authorization as a baseline for input into their authorization decisions on behalf of mission owner systems using the Azure Government cloud DOD Region. 

    This achievement is the result of the collective efforts of Microsoft, DISA and its mission partners to work through requirements pertaining to the adoption of cloud computing for infrastructure, platform and productivity across the DoD enterprise.

    General Availability of DoD Regions

    Information Impact Level 5 requires processing in dedicated infrastructure that ensures physical separation of DoD customers from non-DoD customers. Over the past few months, we ran a preview program with more than 50 customers across the Department of Defense, including all branches of the military, unified combatant commands and defense agencies.

    We are thrilled to announce the general availability of the DOD Region to all validated DoD customers. Key services covering compute, storage, networking and database are available today with full service level agreements and dedicated Azure Government support.

    Dave Milton, Chief Technology Officer for Permuta Technologies, a leading provider of business solutions tailored for the military, affirmed the significance of the general availability of the Azure DoD regions, saying:

    “Azure Government DOD Regions has given us the ability to deploy our SaaS offering, DefenseReady Cloud, to the US Department of Defense in a scalable, secure, and cost-effective environment. The mission-critical nature of DefenseReady Cloud requires high availability, compliance with DoD’s SRG Impact Level 5 requirements, and scalability to support our customers changing demand, with a flexible pricing structure that allow us to offer capability to large enterprises as well as local commands. With Azure Government DOD Region, we are now able to onboard a customer in weeks, not months, allowing for a time-to-value that is unparalleled when compared with on-premises or other government-sponsored options. Through our partnership, Microsoft provided direct access to product group engineers, compliance support, training, and other resources needed to bring our SaaS solution to DoD.”

    These accomplishments and the commentary of our customers and partners further reinforce our commitment to, and the strength of, our long-standing partnership with the US Department of Defense. For more information on Microsoft Cloud for Government services with Information Impact Level 5 provisional authorization, visit the Microsoft in Government blog, and for more detail on the Information Impact Level 5 provisional authorization (including in-scope services), please visit our Microsoft Trust Center.

    To get started today, customers and mission partners may request access to our Azure Government Trial program.


    Announcing TypeScript 3.4 RC


    Today we’re happy to announce the availability of our release candidate (RC) of TypeScript 3.4. Our hope is to collect feedback and early issues to ensure our final release is simple to pick up and use right away.

    To get started using the RC, you can get it through NuGet, or use npm with the following command:

    npm install -g typescript@rc

    You can also get editor support by

    Let’s explore what’s new in 3.4!

    Faster subsequent builds with the --incremental flag

    Because TypeScript files are compiled, there is an intermediate step between writing and running your code. One of our goals is to minimize build time given any change to your program. One way to do that is by running TypeScript in --watch mode. When a file changes under --watch mode, TypeScript is able to use your project’s previously-constructed dependency graph to determine which files could potentially have been affected and need to be re-checked and potentially re-emitted. This can avoid a full type-check and re-emit, which can be costly.

    But it’s unrealistic to expect all users to keep a tsc --watch process running overnight just to have faster builds tomorrow morning. What about cold builds? Over the past few months, we’ve been working to see if there’s a way to save the appropriate information from --watch mode to a file and use it from build to build.

    TypeScript 3.4 introduces a new flag called --incremental, which tells TypeScript to save information about the project graph from the last compilation. The next time TypeScript is invoked with --incremental, it will use that information to detect the least costly way to type-check and emit changes to your project.
    // tsconfig.json
    {
        "compilerOptions": {
            "incremental": true,
            "outDir": "./lib"
        },
        "include": ["./src"]
    }

    By default with these settings, when we run tsc, TypeScript will look for a file called .tsbuildinfo in our output directory (./lib). If ./lib/.tsbuildinfo doesn’t exist, it’ll be generated. But if it does, tsc will try to use that file to incrementally type-check and update our output files.

    These .tsbuildinfo files can be safely deleted and don’t have any impact on our code at runtime – they’re purely used to make compilations faster. We can also name them anything that we want, and place them anywhere we want, using the --tsBuildInfoFile flag.
    // front-end.tsconfig.json
    {
        "compilerOptions": {
            "incremental": true,
            "tsBuildInfoFile": "./buildcache/front-end",
            "outDir": "./lib"
        },
        "include": ["./src"]
    }

    As long as nobody else tries writing to the same cache file, we should be able to enjoy faster incremental cold builds.
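    The same settings can also be passed as command-line flags instead of (or on top of) tsconfig.json. Assuming a tsconfig.json in the working directory, an invocation along these lines should produce the same cache file:

    tsc --incremental --tsBuildInfoFile ./buildcache/front-end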

    Composite projects

    Part of the intent with composite projects (tsconfig.json files with composite set to true) is that references between different projects can be built incrementally. As such, composite projects will always produce .tsbuildinfo files.
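    For reference, a minimal sketch of a composite project and a project that references it might look like the following; the directory names and paths are illustrative:

    // shared/tsconfig.json (a referenced, composite project)
    {
        "compilerOptions": {
            "composite": true,
            "outDir": "./lib"
        },
        "include": ["./src"]
    }

    // app/tsconfig.json (consumes the composite project via a project reference)
    {
        "compilerOptions": {
            "outDir": "./lib"
        },
        "include": ["./src"],
        "references": [{ "path": "../shared" }]
    }

    Such a setup is typically built with tsc --build (or tsc -b), which builds referenced projects in dependency order.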

    outFile

    When outFile is used, the build information file’s name will be based on the output file’s name. As an example, if our output JavaScript file is ./output/foo.js, then under the --incremental flag, TypeScript will generate the file ./output/foo.tsbuildinfo. As above, this can be controlled with the --tsBuildInfoFile flag.

    The --incremental file format and versioning

    While the file generated by --incremental is JSON, the file isn’t meant to be consumed by any other tool. We can’t provide any guarantees of stability for its contents, and in fact, our current policy is that any one version of TypeScript will not understand .tsbuildinfo files generated from another version.

    Improvements for ReadonlyArray and readonly tuples

    TypeScript 3.4 makes it a little bit easier to use read-only array-like types.

    A new syntax for ReadonlyArray

    The ReadonlyArray type describes Arrays that can only be read from. Any variable with a handle to a ReadonlyArray can’t add, remove, or replace any elements of the array.
    function foo(arr: ReadonlyArray<string>) {
        arr.slice();        // okay
        arr.push("hello!"); // error!
    }

    While it’s often good practice to use ReadonlyArray over Array for the purpose of intent, it’s often been a pain given that arrays have a nicer syntax. Specifically, number[] is a shorthand version of Array<number>, just as Date[] is a shorthand for Array<Date>.

    TypeScript 3.4 introduces a new syntax for ReadonlyArray using a new readonly modifier for array types.
    function foo(arr: readonly string[]) {
        arr.slice();        // okay
        arr.push("hello!"); // error!
    }

    readonly tuples

    TypeScript 3.4 also introduces new support for readonly tuples. We can prefix any tuple type with the readonly keyword to make it a readonly tuple, much like we now can with array shorthand syntax. As you might expect, unlike ordinary tuples whose slots could be written to, readonly tuples only permit reading from those positions.
    function foo(pair: readonly [string, string]) {
        console.log(pair[0]);   // okay
        pair[1] = "hello!";     // error
    }

    The same way that ordinary tuples are types that extend from Array – a tuple with elements of type T1, T2, … Tn extends from Array<T1 | T2 | … Tn> – readonly tuples are types that extend from ReadonlyArray. So a readonly tuple with elements T1, T2, … Tn extends from ReadonlyArray<T1 | T2 | … Tn>.

    readonly mapped type modifiers and readonly arrays

    In earlier versions of TypeScript, we generalized mapped types to operate differently on array-like types. This meant that a mapped type like Boxify could work on arrays and tuples alike.
    interface Box<T> { value: T }
    
    type Boxify<T> = {
        [K in keyof T]: Box<T[K]>
    }
    
    // { a: Box<string>, b: Box<number> }
    type A = Boxify<{ a: string, b: number }>;
    
    // Array<Box<number>>
    type B = Boxify<number[]>;
    
    // [Box<string>, Box<number>]
    type C = Boxify<[string, boolean]>;

    Unfortunately, mapped types like the Readonly utility type were effectively no-ops on array and tuple types.
    // lib.d.ts
    type Readonly<T> = {
        readonly [K in keyof T]: T[K]
    }
    
    // How code acted *before* TypeScript 3.4
    
    // { readonly a: string, readonly b: number }
    type A = Readonly<{ a: string, b: number }>;
    
    // number[]
    type B = Readonly<number[]>;
    
    // [string, boolean]
    type C = Readonly<[string, boolean]>;

    In TypeScript 3.4, the readonly modifier in a mapped type will automatically convert array-like types to their corresponding readonly counterparts.
    // How code acts now *with* TypeScript 3.4
    
    // { readonly a: string, readonly b: number }
    type A = Readonly<{ a: string, b: number }>;
    
    // readonly number[]
    type B = Readonly<number[]>;
    
    // readonly [string, boolean]
    type C = Readonly<[string, boolean]>;

    Similarly, you could write a Writable mapped type that strips away readonly-ness, which would convert readonly array containers back to their mutable equivalents.
    type Writable<T> = {
        -readonly [K in keyof T]: T[K]
    }
    
    // { a: string, b: number }
    type A = Writable<{
        readonly a: string;
        readonly b: number
    }>;
    
    // number[]
    type B = Writable<readonly number[]>;
    
    // [string, boolean]
    type C = Writable<readonly [string, boolean]>;

    Caveats

    Despite its appearance, the readonly type modifier can only be used for syntax on array types and tuple types. It is not a general-purpose type operator.
    let err1: readonly Set<number>; // error!
    let err2: readonly Array<boolean>; // error!
    
    let okay: readonly boolean[]; // works fine
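    If you need a read-only view of other collection types, the pre-existing generic types in the standard library still apply; a brief sketch (these rely on existing lib declarations such as ReadonlySet and ReadonlyMap from the ES2015 lib, not on the new readonly syntax):

    let fineSet: ReadonlySet<number>;          // existing lib type
    let fineMap: ReadonlyMap<string, number>;  // existing lib type
    let fineArr: ReadonlyArray<boolean>;       // equivalent to 'readonly boolean[]'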

    const assertions

    When declaring a mutable variable or property, TypeScript often widens values to make sure that we can assign things later on without writing an explicit type.

    let x = "hello";
    
    // hurray! we can assign to 'x' later on!
    x = "world";

    Technically, every literal value has a literal type. Above, the type "hello" got widened to the type string before inferring a type for x.

    One alternative view might be to say that x has the original literal type "hello" and that we can’t assign "world" later on, like so:
    let x: "hello" = "hello";
    
    // error!
    x = "world";

    In this case, that seems extreme, but it can be useful in other situations. For example, TypeScripters often create objects that are meant to be used in discriminated unions.

    type Shape =
        | { kind: "circle", radius: number }
        | { kind: "square", sideLength: number }
    
    function getShapes(): readonly Shape[] {
        let result = [
            { kind: "circle", radius: 100, },
            { kind: "square", sideLength: 50, },
        ];
        
        // Some terrible error message because TypeScript inferred
        // 'kind' to have the type 'string' instead of
        // either '"circle"' or '"square"'.
        return result;
    }

    Mutability is one of the best heuristics of intent which TypeScript can use to determine when to widen (rather than analyzing our entire program).

    Unfortunately, as we saw in the last example, in JavaScript properties are mutable by default. This means that the language will often widen types undesirably, requiring explicit types in certain places.

    function getShapes(): readonly Shape[] {
        // This explicit annotation gives a hint
        // to avoid widening in the first place.
        let result: readonly Shape[] = [
            { kind: "circle", radius: 100, },
            { kind: "square", sideLength: 50, },
        ];
        
        return result;
    }

    Up to a certain point this is okay, but as our data structures get more and more complex, this becomes cumbersome.

    To solve this, TypeScript 3.4 introduces a new construct for literal values called const assertions. Its syntax is a type assertion with const in place of the type name (e.g. 123 as const). When we construct new literal expressions with const assertions, we can signal to the language that:

    • no literal types in that expression should be widened (e.g. no going from "hello" to string)
    • object literals get readonly properties
    • array literals become readonly tuples
    // Type '10'
    let x = 10 as const;
    
    // Type 'readonly [10, 20]'
    let y = [10, 20] as const;
    
    // Type '{ readonly text: "hello" }'
    let z = { text: "hello" } as const;

    Outside of .tsx files, the angle bracket assertion syntax can also be used.
    // Type '10'
    let x = <const>10;
    
    // Type 'readonly [10, 20]'
    let y = <const>[10, 20];
    
    // Type '{ readonly text: "hello" }'
    let z = <const>{ text: "hello" };

    This feature often means that types which would otherwise be used just to hint immutability to the compiler can be omitted.

    // Works with no types referenced or declared.
    // We only needed a single const assertion.
    function getShapes() {
        let result = [
            { kind: "circle", radius: 100, },
            { kind: "square", sideLength: 50, },
        ] as const;
        
        return result;
    }
    
    for (const shape of getShapes()) {
        // Narrows perfectly!
        if (shape.kind === "circle") {
            console.log("Circle radius", shape.radius);
        }
        else {
            console.log("Square side length", shape.sideLength);
        }
    }

    Notice the above needed no type annotations. The const assertion allowed TypeScript to take the most specific type of the expression.

    Caveats

    One thing to note is that const assertions can only be applied immediately on simple literal expressions.
    // Error!
    //   A 'const' assertion can only be applied to a string, number, boolean, array, or object literal.
    let a = (Math.random() < 0.5 ? 0 : 1) as const;
    
    // Works!
    let b = Math.random() < 0.5 ?
        0 as const :
        1 as const;

    Another thing to keep in mind is that const contexts don’t immediately convert an expression to be fully immutable.
    let arr = [1, 2, 3, 4];
    
    let foo = {
        name: "foo",
        contents: arr,
    };
    
    foo.name = "bar";   // error!
    foo.contents = [];  // error!
    
    foo.contents.push(5); // ...works!

    Type-checking for globalThis

    It can be surprisingly difficult to access or declare values in the global scope, perhaps because we’re writing our code in modules (whose local declarations don’t leak by default), or because we might have a local variable that shadows the name of a global value. In different environments, there are different ways to access what’s effectively the global scope – global in Node, window, self, or frames in the browser, or this in certain locations outside of strict mode. None of this is obvious, and often leaves users feeling unsure of whether they’re writing correct code.

    TypeScript 3.4 introduces support for type-checking ECMAScript’s new globalThis – a global variable that, well, refers to the global scope. Unlike the above solutions, globalThis provides a standard way for accessing the global scope which can be used across different environments.
    // in a global file:
    
    let abc = 100;
    
    // Refers to 'abc' from above.
    globalThis.abc = 200;

    globalThis is also able to reflect whether or not a global variable was declared as a const by treating it as a readonly property when accessed.
    const answer = 42;
    
    globalThis.answer = 333333; // error!

    It’s important to note that TypeScript doesn’t transform references to globalThis when compiling to older versions of ECMAScript. As such, unless you’re targeting evergreen browsers (which already support globalThis), you may want to use an appropriate polyfill instead.

    Convert to named parameters

    Sometimes, parameter lists start getting unwieldy.

    function updateOptions(
        hue?: number,
        saturation?: number,
        brightness?: number,
        positionX?: number,
        positionY?: number,
        positionZ?: number) {
        
        // ....
    }

    In the above example, it’s way too easy for a caller to mix up the order of arguments given. A common JavaScript pattern is to instead use an “options object”, so that each option is explicitly named and order doesn’t ever matter. This emulates a feature that other languages have called “named parameters”.

    interface Options {
        hue?: number,
        saturation?: number,
        brightness?: number,
        positionX?: number,
        positionY?: number,
        positionZ?: number
    }
    
    function updateOptions(options: Options = {}) {
        
        // ....
    }
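    A call site then names each option explicitly, so argument order no longer matters; the values below are just for illustration:

    // Only the options the caller cares about need to be passed.
    updateOptions({ hue: 120, brightness: 0.5 });

    // Also fine, thanks to the '{}' default.
    updateOptions();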

    The TypeScript team doesn’t just work on a compiler – we also provide the functionality that editors use for rich features like completions, go to definition, and refactorings. In TypeScript 3.4, our intern Gabriela Britto has implemented a new refactoring to convert existing functions to use this “named parameters” pattern.

    A refactoring being applied to a function to make it take named parameters.

    While we may change the name of the feature by our final 3.4 release, and we believe there is room to improve some of the ergonomics, we would love for you to try the feature out and give us your feedback.

    Breaking changes

    Top-level this is now typed

    Top-level this is now typed as typeof globalThis instead of any. As a consequence, you may receive errors for accessing unknown values on this under noImplicitAny.
    // previously okay in noImplicitAny, now an error
    this.whargarbl = 10;

    Note that code compiled under noImplicitThis will not experience any changes here.

    Propagated generic type arguments

    In certain cases, TypeScript 3.4’s improved inference might produce functions that are generic, rather than ones that take and return their constraints (usually {}).
    declare function compose<T, U, V>(f: (arg: T) => U, g: (arg: U) => V): (arg: T) => V;
    
    function list<T>(x: T) { return [x]; }
    function box<T>(value: T) { return { value }; }
    
    let f = compose(list, box);
    let x = f(100)
    
    // In TypeScript 3.4, 'x.value' has the type
    //
    //   number[]
    //
    // but it previously had the type
    //
    //   {}[]
    //
    // So it's now an error to push in a string.
    x.value.push("hello");

    An explicit type annotation on x can get rid of the error.

    What’s next?

    TypeScript 3.4 is our first release that has had an iteration plan outlining our plans for this release, which is meant to align with our 6-month roadmap. You can keep an eye on both of those, and on our rolling feature roadmap page for any upcoming work.

    Right now we’re looking forward to hearing about your experience with the RC, so give it a shot now and let us know your thoughts!

    – Daniel Rosenwasser and the TypeScript team


    ONNX Runtime integration with NVIDIA TensorRT in preview


    Today we are excited to open source the preview of the NVIDIA TensorRT execution provider in ONNX Runtime. With this release, we are taking another step towards open and interoperable AI by enabling developers to easily leverage industry-leading GPU acceleration regardless of their choice of framework. Developers can now tap into the power of TensorRT through ONNX Runtime to accelerate inferencing of ONNX models, which can be exported or converted from PyTorch, TensorFlow, and many other popular frameworks.

    Microsoft and NVIDIA worked closely to integrate the TensorRT execution provider with ONNX Runtime and have validated support for all the ONNX Models in the model zoo. With the TensorRT execution provider, ONNX Runtime delivers better inferencing performance on the same hardware compared to generic GPU acceleration. We have seen up to 2X improved performance using the TensorRT execution provider on internal workloads from Bing MultiMedia services.

    How it works

    ONNX Runtime together with its TensorRT execution provider accelerates the inferencing of deep learning models by parsing the graph and allocating specific nodes for execution by the TensorRT stack in supported hardware. The TensorRT execution provider interfaces with the TensorRT libraries that are preinstalled in the platform to process the ONNX sub-graph and execute it on NVIDIA hardware. This enables developers to run ONNX models across different flavors of hardware and build applications with the flexibility to target different hardware configurations. This architecture abstracts out the details of the hardware specific libraries that are essential to optimizing the execution of deep neural networks.

    Infographic showing input data and output result using the ONNX model

    How to use the TensorRT execution provider

    ONNX Runtime together with the TensorRT execution provider supports the ONNX Spec v1.2 or higher, with version 9 of the Opset. TensorRT optimized models can be deployed to all N-series VMs powered by NVIDIA GPUs on Azure.

    To use TensorRT, you must first build ONNX Runtime with the TensorRT execution provider (use --use_tensorrt --tensorrt_home <path to location for TensorRT libraries in your local machine> flags in the build.sh tool). You can then take advantage of TensorRT by initiating the inference session through the ONNX Runtime APIs. ONNX Runtime will automatically prioritize the appropriate sub-graphs for execution by TensorRT to maximize performance.
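    For example, a build invocation might look like the following, where the local TensorRT install path is an assumption for illustration; the C++ snippet below then registers the TensorRT execution provider before loading a model:

    ./build.sh --use_tensorrt --tensorrt_home /opt/tensorrt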

    // Create the inference session (so = previously configured session options),
    // register the TensorRT execution provider, then load the ONNX model.
    InferenceSession session_object{so};
    session_object.RegisterExecutionProvider(std::make_unique<::onnxruntime::TensorrtExecutionProvider>());
    status = session_object.Load(model_file_name);

    Detailed instructions are available on GitHub. In addition, a collection of standard tests are available through the onnx_test_runner utility in the repo to help verify the ONNX Runtime build with TensorRT execution provider.
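
    For reference, inference can also be initiated through the ONNX Runtime Python API on a build that includes the TensorRT execution provider. The sketch below is a minimal, hedged example; the model path, input name, and input shape are placeholders rather than details from this announcement:

    import numpy as np
    import onnxruntime as ort

    # On a TensorRT-enabled build, ONNX Runtime automatically assigns supported
    # sub-graphs to the TensorRT execution provider when the session is created.
    session = ort.InferenceSession("model.onnx")  # placeholder model path

    # Build a dummy input that matches the model's first input (placeholder shape).
    input_name = session.get_inputs()[0].name
    dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)

    outputs = session.run(None, {input_name: dummy_input})
    print(outputs[0].shape)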

    What is ONNX and ONNX Runtime

    ONNX is an open format for deep learning and traditional machine learning models that Microsoft co-developed with Facebook and AWS. ONNX allows models to be represented in a common format that can be executed across different hardware platforms using ONNX Runtime. This gives developers the freedom to choose the right framework for their task, as well as the confidence to run their models efficiently on a variety of platforms with the hardware of their choice.

    ONNX Runtime is the first publicly available inference engine with full support for ONNX 1.2 and higher including the ONNX-ML profile. ONNX Runtime is lightweight and modular with an extensible architecture that allows hardware accelerators such as TensorRT to plug in as “execution providers.” These execution providers unlock low latency and high efficiency neural network computations. Today, ONNX Runtime powers core scenarios that serve billions of users in Bing, Office, and more.

    Another step towards open and interoperable AI

    The preview of the TensorRT execution provider for ONNX Runtime marks another milestone in our venture to create an open and interoperable ecosystem for AI. We hope this makes it easier to drive AI innovation in a world with ever more demanding latency requirements for production models. We are continuously evolving and improving ONNX Runtime, and look forward to your feedback and contributions!

    To learn more about using ONNX for accelerated inferencing on the cloud and edge, check out the ONNX session at NVIDIA GTC. Have feedback or questions about ONNX Runtime? File an issue on GitHub, and follow us on Twitter.

    Azure.Source – Volume 74


    Now in preview

    AzCopy support in Azure Storage Explorer now available in public preview

    AzCopy in Azure Storage Explorer is now in public preview. AzCopy is a popular command-line utility that provides performant data transfer into and out of a storage account. AzCopy improves performance and reliability through a scalable design in which concurrency is scaled up according to the number of the machine's logical cores. Azure Storage Explorer provides the graphical interface for various storage tasks, and it now supports using AzCopy as a transfer engine to provide the highest throughput for transferring your files to and from Azure Storage. This capability is available today as a preview in Azure Storage Explorer.

    Screenshot of Azure Storage Explorer showing how to enable AzCopy in Azure Storage Explorer

    Now available for preview: Workload importance for Azure SQL Data Warehouse

    Announcing the preview of Workload Importance for Azure SQL Data Warehouse on the Gen2 platform. Manage resources more efficiently with Azure SQL Data Warehouse, a fast, flexible, and secure analytics platform for enterprises of all sizes. Workload importance gives data engineers the ability to classify requests by importance. Requests with higher importance are guaranteed quicker access to resources, which helps meet SLAs.

    Also available in preview

    News and updates

    Achieve more with Microsoft Game Stack

    Announcing Microsoft Game Stack, a new initiative in which we commit to bringing together Microsoft tools and services that empower game developers to achieve more. Game Stack brings together all of our game-development platforms, tools, and services—such as Azure, PlayFab, DirectX, Visual Studio, Xbox Live, App Center, and Havok—into a robust ecosystem that any game developer can use. The goal of Game Stack is to help you easily discover the tools and services you need to create and operate your game.

    Illustration showing logos for each component of the Microsoft Game Stack

    Azure Databricks – VNet injection, DevOps Version Control and Delta availability

    Azure Databricks provides a fast, easy, and collaborative Apache® Spark™-based analytics platform to accelerate and simplify the process of building big data and AI solutions that drive the business forward, all backed by industry-leading SLAs. With Azure Databricks, you can set up your Spark environment in minutes and auto-scale quickly and easily. You can also apply your existing skills and collaborate on shared projects in an interactive workspace with support for Python, Scala, R, and SQL, as well as data science frameworks and libraries like TensorFlow and PyTorch.

    Hardware innovation for data growth challenges at cloud-scale

    Last week at the Open Compute Project (OCP) Global Summit 2019, Microsoft announced Project Zipline: a cutting-edge compression algorithm and optimized hardware implementation for the types of data we see in our cloud storage workloads. By engineering innovation at the systems level, we've been able to simultaneously achieve higher compression ratios, higher throughput, and lower latency than the other algorithms that are currently available. We are open sourcing the Project Zipline compression algorithms, hardware design specifications, and Verilog source code for register transfer language (RTL), with initial content available today and more coming soon.

    Graph showing Zipline compression for Application Services (92%), IoT Text Files (95%), and System Logs (96%)

    Azure Data Box family now enables import to Managed Disks

    Announcing support for managed disks is now available across the Azure Data Box family of devices, which includes Data Box, Data Box Disk, and Data Box Heavy. The Azure Data Box offline family lets you transfer hundreds of terabytes of data to Microsoft Azure in a quick, inexpensive, and reliable manner. With managed disks support on Data Box, you can now move your on-premises virtual hard disks (VHDs) as managed disks in Azure with one simple step.

    Simplify disaster recovery with Managed Disks for VMware and physical servers

    Azure Site Recovery (ASR) now supports disaster recovery of VMware virtual machines and physical servers by directly replicating to Managed Disks. To enable replication for a machine, you no longer need to create storage accounts because you can now write replication data directly to a type of Managed Disk. This change will not impact the machines which are already in a protected state; however, all new protections will now have this capability available on the Azure portal.

    Simplifying your environment setup while meeting compliance needs with built-in Azure Blueprints

    Announcing the release of our first Azure Blueprint built specifically for a compliance standard: the ISO 27001 Shared Services blueprint sample, which maps a set of foundational Azure infrastructure, such as virtual networks and policies, to specific ISO controls. Azure Blueprints is a free service that helps customers deploy and update cloud environments in a repeatable manner using composable artifacts such as policies, deployment templates, and role-based access controls. This service is built to help customers set up governed Azure environments and can scale to support production implementations for large-scale migrations. The ISO 27001 Shared Services blueprint is already available in your Azure tenant.

    Screenshot of Create blueprint blade in the Azure portal

    Microsoft Azure portal March 2019 update

    This month’s updates include an improved “All services” view, Virtual Network Gateway overview updates, an improved DNS Zone and Load Balancer creation experience, Management Group integration into Activity Log, redesigned overview screens for certain services within Azure DB, an improved creation experience for Azure SQL Database, multiple changes to the Security Center, and more updates to Intune. Sign in to the Azure portal now and see for yourself everything that’s new.

    Approve Azure Pipelines deployments from Slack

    Approving Azure Pipelines deployments from Slack is now available. We're making it even easier for you with a tighter integration that lets you be more productive, even when you're on the go. Approving release deployments in Azure Pipelines is just a click away.

    Azure Service Fabric 6.4 Refresh Release

    Updates to the .NET SDK, Java SDK and Service Fabric runtimes are rolling out through Web Platform Installer, NuGet packages and Maven repositories in all regions.

    Azure Security Center updates

    Additional news and updates

    Technical content

    Run your code and leave build to us

    Getting your app to the cloud is more work than you may anticipate. We're happy to share that there is a faster way. When you need to focus on app code, you can delegate build and deployment to Azure with App Service web apps and we'll take care of building and running your code the way you expect.

    Stay informed about service issues with Azure Service Health

    Azure Service Health helps you stay informed and take action when Azure service issues like incidents and planned maintenance affect you by providing a personalized health dashboard, customizable alerts, and expert guidance. Read how you can use Azure Service Health’s personalized dashboard to stay informed about issues that could affect you now or in the future.

    Screenshot of Service Health - Service issues blade in Azure portal

    Azure Stack IaaS – part four

    Deploying your IaaS VM-based applications to Azure and Azure Stack requires a comprehensive evaluation of your BC/DR strategy. “Business as usual” is not enough in the context of cloud. For Azure Stack, you need to evaluate the resiliency, availability, and recoverability requirements of the applications separate from the protection schemes for the underlying infrastructure. Learn the concepts and best practices to protect your IaaS virtual machines (VMs) on Azure Stack.

    Create a transit VNet using VNet peering

    Azure Virtual Network (VNet) is the fundamental building block for any customer network. VNet lets you create your own private space in Azure, or as I call it, your own network bubble. VNets are crucial to your cloud network as they offer isolation, segmentation, and other key benefits. VNet peering with gateway transit works across the classic Azure Service Management (ASM) and Azure Resource Manager (ARM) deployment models, and works across subscriptions, including subscriptions belonging to different Azure Active Directory tenants. Gateway transit has been available since September 2016 for VNet peering in all regions and will be available for global VNet peering shortly.

    Commit, push, deploy — Git in the Microsoft Azure Cloud

    Git is a popular version control option, and instead of asking you to learn something new, this article serves as an introduction to help Git users get familiar with the cloud (and Azure), including an end-to-end walkthrough. Chris covers how to download, run, and configure a sample app using Git, and from there dives into how to deploy, manage, update, and redeploy that app inside Azure.

    Azure DevOps Slack Integration

    In this quick how-to video, Neil shows how easy it is to set up the Azure DevOps and Slack integration for detailed real-time notifications about your builds and releases. You'll see how to customize what you see, and click through to Azure DevOps to dig into build failures, approve requests, and validate successful deploys.

    Thumbnail from Azure DevOps Slack Integration video on YouTube

    Azure Functions With F#

    This quick post from Aaron Powell walks through how to use VS Code and F# to create Azure Functions v2.

    How to query Azure resources using the Azure CLI

    The Azure CLI can be used not only to create, configure, and delete resources in Azure, but also to query data from Azure. Querying Azure for resource properties is handy when you're writing scripts using the Azure CLI, for instance when you want to get an Azure Virtual Machine or Container Instance IP address to perform some action on that resource. This post is a quick exercise that demonstrates several concepts so you're ready to query a single resource property and store the value of that property in a variable. We'll use Azure Container Instances (ACI), but you don't need to have experience with ACI to complete the steps in this article - the concepts transfer to any Azure resource.
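
    As a rough sketch of that pattern (query a single property and store it in a variable), here is a small Python script that shells out to the Azure CLI with a JMESPath --query; the article works in the shell directly, and the resource group, container name, and property path below are placeholder assumptions:

    import subprocess

    # Ask the Azure CLI for a single property using --query (JMESPath) and tsv output,
    # so the result comes back as a bare string we can store in a variable.
    result = subprocess.run(
        [
            "az", "container", "show",
            "--resource-group", "my-rg",   # placeholder resource group
            "--name", "my-aci",            # placeholder container instance
            "--query", "ipAddress.ip",     # single property to extract
            "--output", "tsv",
        ],
        capture_output=True,
        text=True,
        check=True,
    )

    container_ip = result.stdout.strip()
    print(f"Container IP: {container_ip}")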

    7 things you should know when getting started with Serverless APIs

    In this article, based on a talk Simona Cotin gave at Build, she walks you through taking an existing application with an Express back-end and porting it to a serverless back-end by changing a single line in the front-end code. By the end of the article, you will have built an API that scales instantly as more and more users come in and the workload increases.

    Additional technical content

    Azure shows

    Episode 270 - Hammer and Nail | The Azure Podcast

    Cale Teeter and Sujit D'Mello discuss using a solutions-based approach when selecting Azure services instead of getting caught in the hype of new services.

    Heat Maps and Image Overlays in Azure Maps | Internet of Things Show

    Heat maps are used to represent the density of data using a range of colors. They are often used to show the data "hot spots" on a map and are great for helping to understand data. The heat map layer also supports weighted data points to help bring the most relevant information to the surface. Learn about the heat map and image layer visualizations inside Azure Maps.

    What’s New for Visual Studio 2019 Integrations with Azure Boards | The DevOps Lab

    In this episode, you see a quick walk through of a new experience in Visual Studio 2019; showing how a developer can quickly find the work they need and associate it to their pending changes.

    Azure Pipelines multi-cloud support and integration with DevOps tools | Azure Friday

    Learn to integrate Azure Pipelines with various third-party tools to achieve a full DevOps cycle with multi-cloud support. You can continue to use your existing tools and get Azure Pipelines benefits: application release orchestration, deployment, approvals, and full traceability all the way to the code or issue.

    Five Ways You Can Infuse AI into Your Applications | Five Things

    Leben Things! In case you don't speak Elvish, that roughly translates to "Five Things". This week I sit down with Noelle LaCharite from the Microsoft Cognitive Services team to learn how machines can translate language, perform search on unstructured data, converse like humans and more. Even better, you can use this stuff in your applications right away; no degree in multi-dimensional calculus required. This is five ways that you can infuse AI into your applications today.

    Getting Started with Infrastructure as Code (IaC) | The Open Source Show

    Armon Dadgar, HashiCorp CTO and co-founder, and Aaron Schlesinger walk us through the core concepts of Infrastructure as Code (IaC) and how it goes beyond what people typically think when they hear "Infrastructure." They break down the what, when, how, and why IaC makes developers' lives easier, whether you're running a simple application or have a complex, multi-node system. You'll learn how you can use HashiCorp Terraform to get up and running with IaC, going from nothing to a complete carbon copy of your production environment at the click of button (read: you focus on building, testing, and deploying, not spinning up test environments and hoping they're close to what's in production).

    Quick tour of Azure DevOps projects using Node.js and AKS: Part 2 | Azure Tips and Tricks

    Learn what Azure DevOps projects are and how to use them with Node.js and Azure Kubernetes Service. In part 2, you’ll get to explore the rest of the resources that Azure DevOps projects has to offer.

    Thumbnail from Quick tour of Azure DevOps projects using Node.js and AKS: Part 2 video on YouTube

    How to create a storage account and upload a blob | Azure Portal Series

    The Azure Portal enables you to create and manage storage accounts and upload a blob. In this video of the Azure Portal “How To” Series, learn how to easily create a storage account, upload a blob, and manage the storage account within Storage Explorer (preview).

    Thumbnail from How to create a storage account and upload a blob on YouTube

    Greg Leonardo on Deploying the Azure Way | Azure DevOps Podcast

    Greg Leonardo is a Cloud Architect at Campus Management Corp. and Webonology. In this episode of the Azure DevOps Podcast, he discusses some of the topics from his book, Hands-On Cloud Solutions with Azure: architecting, developing, and deploying the Azure way. He also talks about working with infrastructure as code, provisioning and watching environments, and more about what developers targeting Azure need to know.

    Episode 2 - WTF Azure (How Do I Get Started?) | AzureABILITY

    AzureABILITY host Louis Berman discusses how to get started in Azure with his fellow Cloud Solutions Architect, Srini Ambati. Listen in as Louis and Srini give you a leg up into the cloud.

    Read the transcript

    Events

    Microsoft Create - A Global Startup Event Series

    Create is for startup founders, technical co-founders, and early or first engineers, with potentially a small number of business-focused attendees. The event is ideal for early-stage startups looking to make technical decisions about platform and technology stack. Our agenda focuses heavily on Azure technologies and highlights Microsoft for Startups offerings and the ScaleUp program. The tour is free to attendees.

    Join Microsoft at the NVIDIA GPU Technology Conference

    The world of computing is going deep and wide to work on issues related to our environment, economy, energy, and public health systems. These needs require modern, advanced solutions that were traditionally limited to a few organizations, are hard to scale, and take a long time to deliver. Microsoft Azure delivers High Performance Computing (HPC) capability and tools, integrated into a global-scale cloud platform, to power solutions that address these challenges. Microsoft's partnership with NVIDIA makes access to NVIDIA GPUs easier than ever. This week's NVIDIA GPU Technology Conference shows Azure customers how to combine the flexibility and elasticity of the cloud with the capability of NVIDIA GPUs.

    Cloud Commercial Communities webinar and podcast newsletter–March 2019

    Each month the Cloud Commercial Communities team focuses on core programs, updates, trends, and technologies that Microsoft partners and customers need to know to increase success using Azure and Dynamics. Make sure you catch a live webinar and participate in the live Q&A.

    Decorative graphic for Cloud Commercial Communities webinar and podcast newsletter–March 2019

    IoT in Action: A more sustainable future for farming

    The future of food security and feeding an expanding global population depends upon our ability to increase food production globally, an estimated 70 percent by the year 2050, according to the Food and Agriculture Organization of the United Nations. But challenges including climate change, soil quality, pest control, shrinking land availability, and water resource constraints must be addressed. We believe that Internet of Things (IoT) technology and data-driven agriculture are one answer.

    IoT in Action: Thriving partner ecosystem key to transformation

    The Internet of Things (IoT) is an ongoing journey. Digital transformation requires that solutions be connected so that data can be collected and analyzed more effectively across systems to drive exponential improvements in operations, profitability, and customer and employee loyalty. Through our partner-plus-platform approach, we have committed $5 billion in IoT-focused investments to grow and support our partner ecosystem, specifically through unrelenting R&D innovation in critical areas like security, new development tools and intelligent services, artificial intelligence, and emerging technologies.

    Customers, partners, and industries

    Spinning up cloud-scale analytics is even more compelling with Talend and Microsoft

    Stitch Data Loader is Talend's recent addition to its portfolio for small- and mid-market customers. With Stitch Data Loader, customers can load 5 million rows/month into Azure SQL Data Warehouse for free or scale up to an unlimited number of rows with a subscription. All across the industry, there is a rapid shift to the cloud. Using a fast, flexible, and secure cloud data warehouse is an important first step in that journey. With Microsoft Azure SQL Data Warehouse and Stitch Data Loader, companies can get started faster than ever.

    Economist study: OEMs create new revenue streams with next-gen supply chains

    Original equipment manufacturers (OEMs) make the wheels go round for the business world. Successful OEMs are always on the lookout for opportunities to drive down costs and differentiate their brands; and the rise of IoT offers a golden opportunity to fundamentally transform the supply chain. The Economist Intelligence Unit surveyed 250 senior executives at OEMs in North America, Europe, and Asia-Pacific to gain insights from those customers at the center of the supply chain.

    Photo of a man standing in a lab with several large touch-enabled displays

    Azure Marketplace new offers – Volume 33

    The Azure Marketplace is the premier destination for all your software needs – certified and optimized to run on Azure. Find, try, purchase, and provision applications & services from hundreds of leading software providers. You can also connect with Gold and Silver Microsoft Cloud Competency partners to help your adoption of Azure. In the first half of February we published 50 new offers.

    Accelerating enterprise digital transformation through DevOps

    IT organizations are under more pressure than ever to do more with less; they are expected to drive competitive advantage and innovation with higher quality while managing smaller teams. Organizations must now adapt by adopting rapid and strategic transformation while simultaneously working diligently to keep the lights on, all with the important goal of reducing costs. To address these challenges, Sirrus7, GitHub, and HashiCorp have joined together to create the DevOps Acceleration Engine.

    Maximize existing vision systems in quality assurance with Cognitive AI

    Quality assurance matters to manufacturers. The reputation and bottom line of a company can be adversely affected if defective products are released. If a defect is not detected, and the flawed product is not removed early in the production process, the damage can run in the hundreds of dollars per unit. To mitigate this, many manufacturers install cameras to monitor their products as they move along the production line. Mariner, with its Spyglass solution, uses AI from Azure to achieve visibility over the entire line, and to prevent product defects before they become a problem.


    Azure This Week - 15 March 2019 | A Cloud Guru - Azure This Week

    This time on Azure This Week, Lars covers the official release of Azure DevOps Server 2019, the public preview of Azure Premium Blob Storage and he looks at some new features in Azure Firewall.

    Thumbnail from Azure This Week - 15 March 2019 on YouTube

    Azure Backup for SQL Server in Azure Virtual Machines now generally available!


    How do you back up your SQL Servers today? You could be using backup software that requires you to manage backup servers, agents, and storage, or you could be writing elaborate custom scripts which require you to manage the backups on each server individually. With the modernization of IT infrastructure and the world rapidly moving to the cloud, do you want to continue using legacy backup methods that are tedious, infrastructure-heavy, and difficult to scale? Azure Backup for SQL Server Virtual Machines (VMs) is the modern way of doing backup in the cloud, and we are excited to announce that it is now generally available! It is an enterprise-scale, zero-infrastructure solution that eliminates the need to deploy and manage backup infrastructure while providing a simple and consistent experience to centrally manage and monitor backups on standalone SQL instances and Always On Availability Groups.

    Azure Backup for SQL Server running in Azure Virtual Machines

     

    Built into Azure, the solution combines the core cloud promises of simplicity, scalability, security and cost effectiveness with inherent SQL backup capabilities that are leveraged by using native APIs, to yield high fidelity backups and restores. The key value propositions of this solution are:

    1. 15-minute Recovery Point Objective (RPO): Working with uber critical data and have a low RPO? Schedule a log backup to happen every 15 minutes.
    2. One-click, point-in-time restores: Tired of elaborate manual restore procedures? Restore databases to a point in time up to a second in one click, without having to manually apply a chain of logs over differential and full backups.
    3. Long-term retention: Rigorous compliance and audit needs? Retain your backups for years, based on the retention duration, beyond which the recovery points will be pruned automatically by the built-in lifecycle management capability.
    4. Protection for encrypted databases: Concerned about the security of your data and backups? Back up SQL encrypted databases and secure backups with built-in encryption at rest, while controlling backup and restore operations with Role-Based Access Control.
    5. Auto-protection: Dealing with a dynamic environment where new databases get added frequently? Auto-protect your server to automatically detect and protect the newly added databases.
    6. Central management and monitoring: Losing too much time managing and monitoring backups for each server in isolation? Scale smartly by creating centrally managed backup policies that can be applied across databases. Monitor jobs and get alerts and emails across servers and even vaults from a single pane of glass.
    7. Cost effective: No infrastructure and no overhead of managing the scale, seems like value for the money already? Enjoy reduced total cost of ownership and flexible pay-as-you-go option.

    Get started

    Azure portal with ‘Restore’ blade open inside the vault view that shows the graphical view of continuous log backups.

    Customer feedback

    We have been in preview for a few months now, and have seen an overwhelming response from our customers:

    “Our experience with Azure SQL Server Backup has been fantastic. It’s a solution you can put in place in a couple of minutes and not have to worry about it. To restore DBs, we don’t have to deal with rolling logs and only have to choose a date and time. It gives us great peace of mind to know the data is safely stored in the Recovery Services Vaults with our other protected items.”

    - Steven Hayes, Principal Architect, Acuity Brands Lighting, Inc

    “We have been using Azure Backup for SQL Server for the past few months and have found it simple to use and easy to set up. The backup and restore operations are performant and reliable as well as easy to monitor. We plan to continue using it in the future."

    - Celica E. Candido, Cloud Operations Analyst, Willis Towers Watson

    Additional resources

    Azure Container Registry virtual network and Firewall rules preview support


    While Azure Container Registry (ACR) supports user and headless-service account authentication, customers have expressed their requirements for limiting public endpoint access. Customers can now limit registry access within an Azure Virtual Network (VNet), as well as whitelist IP addresses and ranges for on-premises services.

    VNet and Firewall rules are supported with virtual machines (VM) and Azure Kubernetes Services (AKS).

    Choosing between private and PaaS registries

    As customers move into production, their security teams have a checklist they apply to production workloads; one item on that checklist is limiting all public endpoints. Without VNet support, customers had to choose between standalone products or OSS projects they could run and manage themselves. This put a larger burden on customers to manage the storage, security, scalability, and reliability a production registry requires.

    With VNet and Firewall rules, customers can meet their security requirements while benefiting from a PaaS container registry with integrated security, encryption at rest, geo-redundancy, and geo-replication, freeing up their resources to focus on the unique business problems they face.

    Azure Container Registry PaaS, enabling registry products

    The newest VNet and Firewall rule capabilities of ACR are just the latest additions to its container lifecycle management capabilities. ACR provides core primitives that other registry or CI/CD products may build upon. Our goal with ACR isn't to compete with our partners, but rather to enable them with core cloud capabilities, allowing them to focus on the higher-level, unique capabilities each offers.

    Getting started

    Using the Azure CLI, or the Azure portal, customers can follow our documentation for configuring VNet and Firewall rules.

    VNet and Firewall rules preview pricing

    During preview, VNet and Firewall rules will be included in the Azure Container Registry’s Premium Tier.

    Preview and general availability dates

    As of March 18, 2019, VNet and Firewall rules are available for public preview in all 25 public cloud regions. General availability (GA) will be based on a curve of usage and feedback.

    More information

    Power IoT and time-series workloads with TimescaleDB for Azure Database for PostgreSQL


    We’re excited to announce a partnership with Timescale that introduces support for TimescaleDB on Azure Database for PostgreSQL for customers building IoT and time-series workloads. TimescaleDB has a proven track record of being deployed in production in a variety of industries including oil & gas, financial services, and manufacturing. The partnership reinforces our commitment to supporting the open-source community to provide our users with the most innovative technologies PostgreSQL has to offer.

    TimescaleDB allows you to scale for fast ingest and complex queries while natively supporting full SQL. It leverages PostgreSQL as an essential building block, which means that users get the familiarity and reliability of PostgreSQL, along with the scalability and performance of TimescaleDB. Enabling TimescaleDB on your new or existing Azure Database for PostgreSQL server will eliminate the need to run two databases to collect relational and time-series data.

    How to get started

    If you don’t already have an Azure Database for PostgreSQL server, you can create one with the Azure CLI command az postgres up. Next, run the following command to add TimescaleDB to your Postgres libraries:

    az postgres server configuration set --resource-group mygroup --server-name myserver --name shared_preload_libraries --value timescaledb

    Restart the server to load the new library. Then, connect to your Postgres database and run:

    CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;

    You can now create a TimescaleDB hypertable from scratch or migrate your existing time-series data.
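
    As a hedged illustration of that last step, the following Python sketch (using psycopg2, with placeholder connection details, table, and column names that are not part of the announcement) creates a table and converts it into a hypertable:

    import psycopg2

    # Placeholder connection details for an Azure Database for PostgreSQL server.
    conn = psycopg2.connect(
        host="myserver.postgres.database.azure.com",
        user="myadmin@myserver",
        password="<password>",
        dbname="postgres",
        sslmode="require",
    )
    conn.autocommit = True
    cur = conn.cursor()

    # Create a regular table for time-series readings, then convert it into a hypertable.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS conditions (
            time        TIMESTAMPTZ NOT NULL,
            device_id   TEXT,
            temperature DOUBLE PRECISION
        );
    """)
    cur.execute("SELECT create_hypertable('conditions', 'time', if_not_exists => TRUE);")

    cur.close()
    conn.close()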

    Postgres with TimescaleDB as a foundation for IoT applications

    PostgreSQL is enabling many IoT scenarios. To learn more, refer to the blog post, “Creating IoT applications with Azure Database for PostgreSQL.” With TimescaleDB, this experience is even better. IoT organizations can now also leverage the insights hidden in machine generated data to build new features, automate processes, and drive efficiency.

    Challenge: IoT devices generate a lot of data, which needs to be stored efficiently.
    Solution: TimescaleDB automatically partitions data into chunks to scale for these types of workloads.

    Challenge: IoT data is complex (i.e. marrying device metadata, geospatial data, and time-series data).
    Solution: TimescaleDB combines relational capabilities with time-series-specific functions and is compatible with other PostgreSQL extensions, including PostGIS.

    Challenge: IoT data needs to be accessed by multiple users (i.e. internal users for analytics or external users to expose data in real time).
    Solution: TimescaleDB speaks full SQL, a query language that is familiar across entire organizations.

    Challenge: IoT data requires diverse, customizable ingest pipelines, which require a database with a broad ecosystem.
    Solution: TimescaleDB inherits PostgreSQL's entire ecosystem of tools and extensions.

    Challenge: IoT applications are made up of data at their core, which needs to be stored in a reliable database.
    Solution: TimescaleDB inherits PostgreSQL's 20+ years of reliability and stability.

    TimescaleDB offers valuable performance characteristics on top of PostgreSQL. For IoT use cases that heavily leverage time-series data, TimescaleDB implements automatic chunk partitioning to support high insert rates. Below is a comparison of insert performance on Azure PostgreSQL with and without TimescaleDB, and the degradation observed over time. For IoT use cases with large amounts of time-series data, TimescaleDB can provide significant value to applications that need both relational features and scalability.

    A comparison on Azure PostgreSQL with and without TimescaleDB and observed degradation in insert performance over time.

    Note: General Purpose Compute Gen 5 with 4 vCores, 20GB RAM with Premium Storage

    Although IoT is an obvious use case for a time-series database, time-series data actually exists everywhere. Time-series data is essentially collected over time with an associated timestamp. With TimescaleDB, developers can continue to use PostgreSQL, while leveraging TimescaleDB to scale for time-series workloads.

    Next steps

    As always, we encourage you to leave feedback below. You can also engage with the Azure Database for PostgreSQL team through our feedback page and our forums if you have questions or feature suggestions.

    Azure Data Studio: An Open Source GUI Editor for Postgres


    When you are working with a database, or any other kind of software, your experience is enhanced or hindered by the tools you use to interact with it. PostgreSQL has a command line tool, psql, and it’s pretty powerful, but some people much prefer a graphical editor. Even if you typically use command line, you may want to go visual sometimes. At Microsoft we've spent many years building experiences to enhance developers' day-to-day productivity. Having choices is important. It allows you to go with the tool that works for you.

    Today we're excited to announce preview support for PostgreSQL in Azure Data Studio. Azure Data Studio is a cross-platform modern editor focused on data development. It's available for Linux, MacOS, and Windows. Plus, Azure Data Studio comes with an integrated terminal so you're never far away from psql.

    We're also introducing a corresponding preview PostgreSQL extension in Visual Studio Code (VS Code). Both Azure Data Studio and VS Code are open source and extensible - two things that PostgreSQL itself is based on.

    Azure Data Studio inherits a lot of VS Code functionality. It also supports most VS Code extensions, such as those for Python, R, and Kubernetes. If your primary use case is data, choose Azure Data Studio. You can manage multiple database connections, explore the database object hierarchy, set up dashboards, and more.

    On the other hand, if you're closer to application development than you are to database administration, then go for our PostgreSQL extension in VS Code. Actually, you don't have to choose - use both, switching according to what works best for you at the time.

    Connect to Postgres

    Curious about what’s included? Let’s take a deeper look at the development experience for PostgreSQL in Azure Data Studio. You can connect to your Postgres server or establish a connection directly to a database. The Postgres server can be hosted on-premises, in a virtual machine (VM), or from the managed service of any cloud provider.

    Connect to Postgres in Azure

    Organize your servers

    Often you have multiple Postgres servers you’re working with. Perhaps there’s one production server, a corresponding stage server, and maybe multiple dev/test servers. Knowing which is which is key, especially being able to clearly identify your production server. In Azure Data Studio you can use server groups to categorize your servers. You can highlight your production server group in red to make it visually distinct from the others.

    Organize your servers in Azure

    Track down database objects

    Your Postgres server evolves as you add new functionality. It’s helpful to be able to clearly see what columns, indexes, triggers, and functions have been created for each database and table. This is especially true when you’re not the only person working on that Postgres instance. Azure Data Studio provides convenient hierarchical navigation in the sidebar. With it you can easily explore and keep track of your server's databases, tables, views, and other objects.

    Track down database objects in Azure

    Write queries efficiently

    As you look through the new database objects your teammates have created, it’s helpful to go beyond the name of the object to the DDL that composes it. Even if you’re the only person working on your Postgres instance, there may be objects you created a while back that you want to look up. Checking the DDL is a useful double-check to confirm that an object is doing what you expect.

    Azure Data Studio provides “Peek Definition” and “Go to Definition” functionality so you can do that, and even do it as you use the object in a query. For example, let’s say you want to query pg_stat_activity, one of the built-in statistics views that comes with Postgres. You can use “Go to Definition” to see all its columns and understand what this view is based on.

    Write queries efficiently in Azure Data Studio

    Writing SQL queries is bread and butter when working with Postgres, whether you’re an expert or are just getting started with this RDBMS. Whoever you are, IntelliSense for SQL is integrated into Azure Data Studio to help you write your queries quicker. With IntelliSense’s context-aware code completion suggestions, you can use fewer keystrokes to get the job done.

    If you use Postgres a lot, you probably have a few SQL queries you end up reusing over and over. Whether they are detailed CREATE statements or complex SELECTs, you can templatize each one into a SQL code snippet. That way you don’t have to retype it afresh each time. Azure Data Studio inherits its code snippet functionality from Visual Studio Code. Code snippets help you avoid errors from retyping code, and overall let you develop faster.

    Customize your editor

    One advantage of modern development GUIs is the ability to customize them to suit your unique preferences. For example, in this blog we’ve used the Solarized Dark theme in screenshots. Honestly, that isn’t everyone’s cup of tea. Well there are ten more color themes you can choose from in Azure Data Studio, not to mention a high contrast option.

    The personalization options extend to key bindings as well. Don't like using the default Ctrl+N to open a new tab? You can change it. Or maybe you want a keyboard shortcut that doesn't come out of the box with Azure Data Studio. You can create and customize key bindings using the Keyboard Shortcuts editor.

    Customize your editor in Azure Data Studio

    How to get started

    There are even more features to discover, like Git source control integration and customized dashboards and widgets. You can start using the preview for PostgreSQL in Azure Data Studio today - check out the install instructions. To start using our preview PostgreSQL extension for Visual Studio Code, learn more on our GitHub page.

    These two features are in preview and your feedback is critical to making them better and making them work for you. Share your feedback on our PostgreSQL GitHub pages for Azure Data Studio or Visual Studio Code respectively.


    Visual Studio 2019 Launch Event agenda and speakers now published


    We’re only 15 days away from the general availability of Visual Studio 2019 and our virtual Visual Studio 2019 Launch Event. It’s been incredible to see all the buzz and excitement in the community around the launch, from the 180+ local launch events happening all across the globe over the next months to all the posts about the features you’re most excited about on Twitter. Today, I’m happy to share the full agenda for the Visual Studio 2019 Launch Event with you, alongside the list of speakers.

    Pacific Daylight Time (UTC-7) | Coordinated Universal Time (UTC) | Session | Speaker(s)
    9:00 AM | 16:00 | Not your average keynote | Scott Hanselman & friends
    10:00 AM | 17:00 | Live Q&A with Visual Studio Big Wigs | Amanda Silver & Joseph Hill
    10:30 AM | 17:30 | Write beautiful code, faster | Kendra Havens
    11:00 AM | 18:00 | Streamline your dream dev team | Allison Buchholtz-Au & Jon Chu
    11:30 AM | 18:30 | Squash bugs and improve code quality | Leslie Richardson
    12:00 PM | 19:00 | Taking DevOps to the next level with GitHub and Azure DevOps | Steven Borg & Stanley Goldman
    12:30 PM | 19:30 | AI-infused break | Seth Juarez
    1:00 PM | 20:00 | Accelerate your C++ development | Erika Sweet & Marian Luparu
    1:30 PM | 20:30 | Cross-platform mobile apps made easy using Xamarin | James Montemagno
    2:00 PM | 21:00 | To the cloud with Visual Studio and Azure | Andrew Hall & Paul Yuknewicz
    2:30 PM | 21:30 | Build amazing web apps with .NET Core | Dan Roth
    3:00 PM | 22:00 | A tour of Visual Studio for Mac for .NET development | Mikayla Hutchinson
    3:30 PM | 22:30 | Amazing devs doing amazing things | Jeff Fritz, Ginny Caughey (MVP), & Oren Novotny (MVP)
    4:00 PM | 23:00 | #CodeParty Virtual Attendee Party, live on Twitch

    As you can see, we have a day packed with exciting demos and conversations with the incredible people behind the products in store. Be sure to stick around for the #CodeParty Virtual Attendee Party closing out the day on twitch.tv/visualstudio, sponsored by our amazing Visual Studio partners. There will be plenty of opportunities to talk to our team, hang out, and of course win some awesome prizes. Check out the full list of partners behind the party on the launch event website.

    Be a part of the launch celebration

    Of course, it wouldn't be much of a celebration if we're the only ones celebrating. We're hoping you'll join in on the launch; here's how you can participate:

    • #VS2019 on Twitter
      • Let us know what your favorite feature of Visual Studio 2019 is and what you’re most excited about
      • If you spot some of the hidden gems and Easter eggs we sprinkled throughout the keynote, show off your eye for detail online
      • During the sessions between 10 AM and 4 PM (Pacific Time) ask questions and we’ll do our best to get them in front of the speakers to be answered live
    • twitch.tv/visualstudio
      • We’ll be streaming on Twitch all day (alongside YouTube and Channel 9) and we’ll have team members online to chat with and answer your questions
      • The #CodeParty at the tail-end of the launch event will be exclusively on Twitch, where you can hang out, chat with the people behind the products, and win prizes
    • Local launch events
      • There are over 180 local launch events happening across the globe between April 2nd and June 30th. Even if you can’t tune in live on April 2nd, these community-driven events will offer plenty of opportunities to learn and connect

     

    Thank you for your enthusiasm about the launch so far and we hope to see you on April 2nd!

    The post Visual Studio 2019 Launch Event agenda and speakers now published appeared first on The Visual Studio Blog.


    Bing and React on Amazon Tablets


    Tablets are an important part of the Bing success story, with many users looking for the latest news, checking out sports scores, and searching for entertainment. To better optimize for such user intent across devices of varying screen sizes and aspect ratios, while providing a fluid and aesthetically pleasing experience, we at Microsoft completely redesigned the Bing homepage on Amazon Fire tablets using React + Redux.

     

    This is the first major browser experience on Bing that is rendered entirely on the client using React, with Redux handling state management across tabs. Most of the Bing browser ecosystem is rendered on the server side; however, for experiences that are highly interactive, highly adaptable, and API-powered, such as feeds with personalized news stories from the web, client rendering is the better choice. We evaluated multiple options for client rendering and settled on React + Redux given its range of features, high performance, and development simplicity, and since other Bing experiences were already using React (such as the camera features in our Bing iOS app), it was a natural choice for us.

    Designing for a spectrum of different form factors can be a challenge. For example, the Fire tablet is available in a wide range of dimensions, from 7" to 10.1", as seen on the Amazon site.

    For this reason, building a truly responsive experience was paramount to us. We built the UI to respond with single-column or multi-column layouts, adapting to all the different Fire tablet screen sizes. Its smooth transition in the browser is what makes client rendering and React great for such an experience.

    The client is powered by a RESTful API that serves our data and caching needs. The API encapsulates a service built on our Microsoft Azure based micro-services platform, which helps us meet our goals of >99.99% availability, elastic scalability at internet scale, and geo-redundancy across five continents. In addition to reducing data retrieval latency, we use caching to minimize the Bing API payload.

    The experience at the moment can only be accessed on Amazon Fire tablets using their Silk browser, although we’re always experimenting on other platforms too. You can see the responsiveness of the experience in the screenshots below, auto-adjusting to different device widths (in this case, self-adapting the UI to three columns, two columns and one column):

    Architecturally, we have scalable web services built using .NET Core which take and process the initial HTTPS requests from the user. The service connects to our micro-service running Node.js in Azure and to our React + Redux libraries.

    To get the actual data, we have a set of RESTful APIs that constantly fetch information from our in-memory NoSQL DB, which is populated continuously by a set of services calling the Bing APIs. Caching services allow for fault resiliency and performance optimization. The latency between the Bing APIs and the NoSQL DB is kept to a minimum so that fresh content is available to the end user (which is crucial since the feeds and trending content are primarily news-centered).

     

    On the client side we also make use of CSS parallax so that scrolling will smoothly transition the page into full-screen, docking our search box and header together at the top. We originally created this animation by modifying the DOM on browser scroll events but found that parallax is much more lightweight and stays smooth on any hardware, so long as it has browser support. Once the user scrolls and docks (shown below), they can endlessly scroll the embedded iframe to browse their feed. When finished, one simple click on the Bing logo will return them to the default state and refresh their feed for any new available content.

     

    The feeds you get on Fire tablets are the same ones you get in the Bing iOS or Android app. If you don't want the feeds, there is always an option on the Bing settings page to minimize them:


    In addition to the personalized feeds, our users can also enjoy a variety of other content, such as news, sports, recipes, and much more.

    Hope you enjoy the experience and let us know if you have any feedback. We’ll continue to develop and improve the experience — cheers!

    Azure Machine Learning service now supports NVIDIA’s RAPIDS


    Azure Machine Learning service is the first major cloud ML service to support NVIDIA’s RAPIDS, a suite of software libraries for accelerating traditional machine learning pipelines with NVIDIA GPUs.

    Just as GPUs revolutionized deep learning through unprecedented training and inferencing performance, RAPIDS enables traditional machine learning practitioners to unlock game-changing performance with GPUs. With RAPIDS on Azure Machine Learning service, users can accelerate the entire machine learning pipeline, including data processing, training, and inferencing, with GPUs from the NC_v3, NC_v2, ND, or ND_v2 families. Users can unlock performance gains of more than 20X (with 4 GPUs), slashing training times from hours to minutes and dramatically reducing time-to-insight.

    The following figure compares training times on CPU and GPUs (Azure NC24s_v3) for a gradient boosted decision tree model using XGBoost. As shown below, performance gains increase with the number of GPUs. In the Jupyter notebook linked below, we’ll walk through how to reproduce these results step by step using RAPIDS on Azure Machine Learning service.

    Chart: XGBoost training times on CPU and GPUs (Azure NC24s_v3)

    How to use RAPIDS on Azure Machine Learning service

    Everything you need to use RAPIDS on Azure Machine Learning service can be found on GitHub.

    The above repository consists of a master Jupyter Notebook that uses the Azure Machine Learning service SDK to automatically create a resource group, workspace, compute cluster, and preconfigured environment for using RAPIDS. The notebook also demonstrates a typical ETL and machine learning workflow to train a gradient boosted decision tree model. Users are also free to experiment with different data sizes and the number of GPUs to verify RAPIDS multi-GPU support.
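
    To give a flavor of that workflow, here is a minimal, hedged sketch of GPU-accelerated ETL with cuDF followed by gradient boosted decision tree training with XGBoost on the GPU; the file name, column names, and parameters are illustrative placeholders, not values taken from the notebook:

    import cudf
    import xgboost as xgb

    # ETL on the GPU using cuDF's Pandas-like API (placeholder file and columns).
    gdf = cudf.read_csv("sensor_readings.csv")
    gdf = gdf[gdf["temperature"] > 0]

    # Hand the prepared data to XGBoost and train with the GPU histogram algorithm.
    pdf = gdf.to_pandas()
    dtrain = xgb.DMatrix(pdf[["temperature", "pressure"]], label=pdf["failure"])
    params = {"tree_method": "gpu_hist", "objective": "binary:logistic", "max_depth": 8}
    booster = xgb.train(params, dtrain, num_boost_round=100)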

    About RAPIDS

    RAPIDS uses NVIDIA CUDA for high-performance GPU execution, exposing GPU parallelism and high memory bandwidth through a user-friendly Python interface. It includes a dataframe library called cuDF, which will be familiar to Pandas users, as well as an ML library called cuML that provides GPU versions of many of the machine learning algorithms available in Scikit-learn. And with DASK, RAPIDS can take advantage of multi-node, multi-GPU configurations on Azure.

    Diagram: The RAPIDS software stack (cuDF, cuML, and DASK on NVIDIA CUDA)

    Accelerating machine learning for all

    With the support for RAPIDS on Azure Machine Learning service, we are continuing our commitment to an open and interoperable ecosystem where developers and data scientists can use the tools and frameworks of their choice. Azure Machine Learning service users will be able to use RAPIDS in the same way they currently use other machine learning frameworks, and they will be able to use RAPIDS in conjunction with Pandas, Scikit-learn, PyTorch, TensorFlow, etc. We strongly encourage the community to try it out and look forward to your feedback!

    Microsoft and NVIDIA extend video analytics to the intelligent edge


    Artificial Intelligence (AI) algorithms are becoming more intelligent and sophisticated every day, allowing IoT devices like cameras to bridge the physical and digital worlds. The algorithms can trigger alerts and take actions automatically — from finding available parking spots and missing items in a retail store to detecting anomalies on solar panels or workers approaching hazardous zones.

    Processing these state-of-the-art AI algorithms in a datacenter requires a stable, high-bandwidth connection to deliver video feeds to the cloud. However, these cameras are often located in remote areas with unreliable connectivity, or sending footage to the cloud may not be sensible given bandwidth, security, and regulatory needs.

    Microsoft and NVIDIA are partnering on a new approach for intelligent video analytics at the edge to transform raw, high-bandwidth videos into lightweight telemetry. This delivers real-time performance and reduces compute costs for users. The “cameras-as-sensors" and edge workloads are managed locally by Azure IoT Edge and the camera stream processing is powered by NVIDIA DeepStream. Once the videos are converted, the data can be ingested to the cloud using Azure IoT Hub.

    Edge appliance and Azure cloud diagram

    The companies plan to offer customers enterprise-ready devices running DeepStream in the Azure IoT device catalog, and the NVIDIA DeepStream module will soon be made available in the Azure IoT Edge marketplace.

    Over the years, Microsoft and NVIDIA have helped customers run demanding applications on GPUs in the cloud. With this latest collaboration, NVIDIA DeepStream and Azure IoT Edge extend the AI-enhanced video analytics pipeline to where footage is captured, securely and at scale. Now, our customers can get the best of both worlds—accelerated video analytics at the edge with NVIDIA GPUs and secure connectivity and powerful device management with Azure IoT Edge and Azure IoT Hub.

    To learn more, visit the Azure IoT Edge and NVIDIA DeepStream product pages. If you are attending GTC in person, join us Tuesday, March 19, 2019 from 9:00 – 10:00 AM at session S9545 – “Using the DeepStream SDK for AI-Based Video Analytics” or visit Microsoft at Booth 1122.
