
Choosing between Azure VNet Peering and VNet Gateways


As customers adopt Azure and the cloud, they need fast, private, and secure connectivity across regions and Azure Virtual Networks (VNets). Customer needs vary based on the type of workload. For example, if you want to ensure data replication across geographies, you need a high-bandwidth, low-latency connection. Azure offers VNet connectivity options that cater to these varying needs: you can connect VNets via VNet peering or VPN gateways.

It is not surprising that VNet is the fundamental building block for any customer network. VNet lets you create your own private space in Azure, or as I call it your own network bubble. VNets are crucial to your cloud network as they offer isolation, segmentation, and other key benefits. Read more about VNet’s key benefits in our documentation “What is Azure Virtual Network?”

VNet peering

VNet peering enables you to seamlessly connect Azure virtual networks. Once peered, the VNets appear as one, for connectivity purposes. The traffic between virtual machines in the peered virtual networks is routed through the Microsoft backbone infrastructure, much like traffic is routed between virtual machines in the same VNet, through private IP addresses only. No public internet is involved. You can peer VNets across Azure regions, too – all with a single click in the Azure Portal.

  • VNet peering - connecting VNets within the same Azure region
  • Global VNet peering - connecting VNets across Azure regions

An image depicting how VNet peering connects VNets.

To learn more, look at our documentation overview "Virtual network peering" and "Create, change, or delete a virtual network peering."
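If you prefer scripting over the portal, the same peering can be created with the Azure CLI. The sketch below is a minimal example using placeholder names (my-rg, vnet-a, vnet-b); a second, mirror-image command is needed to peer vnet-b back to vnet-a, and depending on your CLI version the remote network parameter may be --remote-vnet or --remote-vnet-id.

# Peer vnet-a to vnet-b (resource names are placeholders)
az network vnet peering create \
  --name vnet-a-to-vnet-b \
  --resource-group my-rg \
  --vnet-name vnet-a \
  --remote-vnet vnet-b \
  --allow-vnet-access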

VPN gateways

A VPN gateway is a specific type of VNet gateway that is used to send traffic between an Azure virtual network and an on-premises location over the public internet. You can also use a VPN gateway to send traffic between VNets. Each VNet can have only one VPN gateway.

An image depicting how VPN gateways are used to send traffic via public internet.

To learn more, look at our documentation overview "What is VPN Gateway?" and "Configure a VNet-to-VNet VPN gateway connection by using the Azure portal."
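For a scripted VNet-to-VNet setup, the rough Azure CLI shape looks like the sketch below. It assumes each VNet already has a GatewaySubnet and a public IP resource, and all names (my-rg, gw-a, gw-b, a-to-b) are placeholders; gateway creation itself can take 30–45 minutes.

# Create a route-based VPN gateway in each VNet (shown for vnet-a; repeat for vnet-b)
az network vnet-gateway create \
  --name gw-a \
  --resource-group my-rg \
  --vnet vnet-a \
  --public-ip-address gw-a-ip \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw1

# Connect the two gateways (create the reverse connection from gw-b to gw-a as well)
az network vpn-connection create \
  --name a-to-b \
  --resource-group my-rg \
  --vnet-gateway1 gw-a \
  --vnet-gateway2 gw-b \
  --shared-key "a-shared-secret"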

Which is best for you?

While we offer two ways to connect VNets, you might want to pick one over the other based on your specific scenario and needs.

VNet Peering provides a low latency, high bandwidth connection useful in scenarios such as cross-region data replication and database failover scenarios. Since traffic is completely private and remains on the Microsoft backbone, customers with strict data policies prefer to use VNet Peering as public internet is not involved. Since there is no gateway in the path, there are no extra hops, ensuring low latency connections.

VPN Gateways provide a limited-bandwidth connection and are useful in scenarios where encryption is needed but bandwidth restrictions are tolerable. In these scenarios, customers are also not as latency-sensitive.

VNet Peering and VPN Gateways can also co-exist via gateway transit

Gateway transit enables you to use a peered VNet’s gateway for connecting to on-premises instead of creating a new gateway for connectivity. As you increase your workloads in Azure, you need to scale your networks across regions and VNets to keep up with the growth. Gateway transit allows you to share an ExpressRoute or VPN gateway with all peered VNets and lets you manage the connectivity in one place. Sharing enables cost-savings and reduction in management overhead.

With gateway transit enabled on VNet peering, you can create a transit VNet that contains your VPN gateway, Network Virtual Appliance, and other shared services. As your organization grows with new applications or business units and as you spin up new VNets, you can connect to your transit VNet with VNet peering. This prevents adding complexity to your network and reduces management overhead of managing multiple gateways and other appliances.

An image depicting VNet peering with gateway transit.

To learn more about the powerful and unique functionality of gateway transit, refer to our blog post "Create a transit VNet using VNet peering."
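If you script your peerings, gateway transit is controlled by two flags on the peering itself. The sketch below is illustrative only: hub-vnet, spoke-vnet, and my-rg are placeholder names, and it assumes the hub VNet already contains the VPN or ExpressRoute gateway.

# Hub side: allow peered VNets to use this VNet's gateway
az network vnet peering create \
  --name hub-to-spoke \
  --resource-group my-rg \
  --vnet-name hub-vnet \
  --remote-vnet spoke-vnet \
  --allow-vnet-access \
  --allow-gateway-transit

# Spoke side: route on-premises traffic through the hub's gateway
az network vnet peering create \
  --name spoke-to-hub \
  --resource-group my-rg \
  --vnet-name spoke-vnet \
  --remote-vnet hub-vnet \
  --allow-vnet-access \
  --use-remote-gateways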

Differences between VNet Peering and VPN Gateways

 

 

  • Cross-region support?
    • VNet Peering: Yes – via Global VNet Peering.
    • VPN Gateways: Yes.
  • Cross-Azure Active Directory tenant support?
    • VNet Peering: Yes, learn how to set it up in our documentation "Create a virtual network peering."
    • VPN Gateways: Yes, see our documentation on VNet-to-VNet connections.
  • Cross-subscription support?
    • VNet Peering: Yes, see our documentation "Resource Manager, different subscriptions."
    • VPN Gateways: Yes, see our documentation "Configure a VNet-to-VNet VPN gateway connection by using the Azure portal."
  • Cross-deployment model support?
    • VNet Peering: Yes, see our documentation "Different deployment models, same subscription."
    • VPN Gateways: Yes, see our documentation "Connect virtual networks from different deployment models using the portal."
  • Limits
    • VNet Peering: You can peer up to 500 VNets with one VNet, as described in the documentation on networking limits.
    • VPN Gateways: Each VNet can have only one VPN gateway, and depending on the SKU, the gateway supports a different number of tunnels.
  • Pricing
    • VNet Peering: Ingress/egress charged.
    • VPN Gateways: Gateway + egress charged.
  • Encrypted?
    • VNet Peering: Software-level encryption is recommended.
    • VPN Gateways: Yes, a custom IPsec/IKE policy can be created and applied to new or existing connections.
  • Bandwidth limitations?
    • VNet Peering: No bandwidth limitations.
    • VPN Gateways: Varies based on the type of gateway, from 100 Mbps to 1.25 Gbps.
  • Private?
    • VNet Peering: Yes, there are no public IP endpoints. Traffic is routed through the Microsoft backbone and is completely private; no public internet is involved.
    • VPN Gateways: A public IP is involved.
  • Transitive relationship
    • VNet Peering: If VNet A is peered to VNet B, and VNet B is peered to VNet C, VNet A and VNet C cannot currently communicate. Spoke-to-spoke communication can be achieved via NVAs or gateways in the hub VNet. See an example in our documentation.
    • VPN Gateways: If VNet A, VNet B, and VNet C are connected via VPN gateways and BGP is enabled in the VNet connections, transitivity works.
  • Typical customer scenarios
    • VNet Peering: Data replication, database failover, and other scenarios needing frequent backups of large data.
    • VPN Gateways: Encryption-specific scenarios that are not latency sensitive and do not need high throughput.
  • Initial setup time
    • VNet Peering: It took me 24.38 seconds, but you should give it a shot!
    • VPN Gateways: About 30 minutes to set up.
  • FAQ link
    • VNet Peering: VNet peering FAQ
    • VPN Gateways: VPN gateway FAQ

Conclusion

Azure offers VNet peering and VNet gateways to connect VNets. Based on your unique scenario, you might want to pick one over the other. We recommend VNet peering for within-region and cross-region scenarios.

We always love to hear from you, so please feel free to provide any feedback via our forums.


Smarter Member List Filtering for C++ 


We are always looking for ways to make you more productive while coding in Visual Studio. In Visual Studio 2019 version 16.2, we have created a smarter, more relevant Member List. Specifically, we now apply method filtering based on type qualifiers. To illustrate this, consider the following example: 

You have two vectors, but one is const. When we invoke the member list on the non-const vector, we see the option for push_back. However, when we invoke the member list on the const vector, Visual Studio now knows not to display any non-const members on a const object: 

 

Even More Filtering 

If you wish to benefit further from Member List and completion filtering, try out Predictive IntelliSense. As an experimental feature, Predictive IntelliSense is disabled by default, but can be enabled under Tools > Options > Text Editor > C/C++ > Experimental. Or simply type “predictive” in the (Ctrl + Q) Search Bar. Currently, Predictive IntelliSense filters by type when you’re in an argument or assignment position. In the example below, i is of type int, so Predictive IntelliSense only shows int items in the completion list. 

Note: If you ever wish to unfilter the completion list, just click the green “+” symbol in the bottom left of the list. 

Talk to Us!  

If you have feedback on Member List filtering in Visual Studio, we would love to hear from you. We can be reached via the comments below or via email (visualcpp@microsoft.com). If you encounter other problems with Visual Studio or MSVC or have a suggestion, you can use the Report a Problem tool in Visual Studio or head over to Visual Studio Developer Community. You can also find us on Twitter @VisualC and follow me @nickuhlenhuth.  

 


Microsoft Azure welcomes customers, partners, and industry leaders to Siggraph 2019!


SIGGRAPH is back in Los Angeles and so is Microsoft Azure! I hope you can join us at Booth #1351 to hear from leading customers and innovative partners.

Teradici, Bebop, Support Partners, Blender, and more will be there to showcase the latest in cloud-based rendering and media workflows:

  • See a real-time demonstration of Teradici’s PCoIP Workstation Access Software, showcasing how it enables a world-class end-user experience for graphics-accelerated applications on Azure’s NVIDIA GPUs.
  • Experience a live demonstration of industry-standard visual effects, animation, and other post-production tools on the BeBop platform. It is the leading solution for cloud-based media and entertainment workflows, creativity, and collaboration.
  • Learn more about how cloud-integrator Support Partners enables companies to run complex and exciting hybrid workflows in Azure.
  • Be the first to hear about Azure’s integration with Blender’s render manager Flamenco and how users can easily deploy a completely virtual render farm and file server. The Azure Flamenco Manager will be freely available on GitHub, and we can’t wait to hear how it is being used and get your feedback.

We’re also demonstrating how you can simplify the creation and management of hybrid cloud rendering environments, get the most out of your on-premises investments while bursting to the cloud for scale on demand, and increase your output with high-performance GPUs. The Microsoft Avere, HPC, and Batch teams will be onsite to answer your questions about these new technologies, which are all generally available at SIGGRAPH 2019.

  • Azure Render Hub simplifies the creation and management of hybrid cloud rendering environments in Azure, providing integration with your existing AWS Thinkbox Deadline or PipelineFX Qube! render farm; Tractor and OpenCue support is coming soon. It also orchestrates infrastructure setup and provides pay-per-use licensing and governance controls, including detailed cost tracking. The Azure Render Hub web app is available from GitHub, where we welcome feedback and feature requests.
  • Maximize your resource pools by integrating your existing network attached storage (NAS) and Azure Blob Storage using Azure FXT Edge Filer. This on-premises caching appliance optimizes access to data in your datacenter, in Azure, and across a wide-area network (WAN). A combination of software and hardware, Microsoft Azure FXT Edge Filer delivers high throughput and low latency for hybrid storage infrastructure supporting large rendering workloads. You can learn more by visiting the Azure FXT Edge Filer product page.
  • Support powerful remote visualization workloads and other graphics-intensive applications using Azure NV-series VMs, backed by NVIDIA GPUs. Large memory, support for premium disks, and hyper-threading mean these VMs offer double the number of vCPUs compared to the previous generation. Learn more about the NVIDIA and Azure partnership.

The Microsoft team and our partners will also be in room #512 for our Azure Customer Showcase and Training Program.

  • Tuesday and Wednesday morning, Azure engineers, software partners, and top production companies will share unique insights on cloud-enabled workflows that can help you improve efficiency and lower production costs.
  • Then, in the afternoon, we have a three-hour deep dive into studio workflows on Azure. This will cover everything from Azure infrastructure, networking, and storage capabilities to how to enable Avere caching technology and set up burst render environments with popular render farm managers. At the end of every training session, industry leaders will join us for a fireside chat to talk about the cloud. Seating is first-come, first-served, so get there early! Full schedule below.
    • Tuesday, July 30: 2pm – 5pm
    • Wednesday, July 31: 2pm – 5pm
    • Thursday, August 1: 10am – 1pm

If you’re curious about our Xbox Adaptive Controllers, come and check them out at the Adaptive Tech area of the Experience Hall and dive deep into new technologies by adding the following Tech Talks to your agenda:

  • Monday, July 29, 2019 from 12:30pm – 2pm room 504: Living in a Virtual World - With the VFX and animation industry moving into a new frontier of studio infrastructure and pipeline, join us as we delve into the best practices of moving your studio into a virtual environment securely, efficiently and economically.
  • Tuesday, July 30, 2019 from 2pm – 3:30pm room 503: Going Cloud Native - Join a continued discussion with key representatives from the graphics community who will compare experiences and explore techniques related to pushing the production pipeline and correlated resources toward the cloud.
  • Wednesday, July 31, 2019 from 12pm – 1pm room 309: Volumetric Video Studios - Volumetric Video providers gather to discuss their experiences, challenges, and opportunities in the early days of this new medium. Where is the market now, and where will it go? Topics include successes and lessons learned so far, most/least active scenarios, creator and consumer perceptions, technology evolution, trends in the market, and predictions for the years ahead.
  • Wednesday, July 31, 2019 from 2pm to 4pm room 406A: Volumetric Video Creators - Content creators discuss the advantages of using volumetric video captures as a way to tell stories, entertain and educate, as well as lessons learned along the way. Topics covered include the funding landscape, best methods of reaching audiences, most effective storytelling methods, and future creative directions.

If you haven’t registered yet or are looking for a pass, you can register now for a free guest pass using code MICROSOFT19.

We hope to see you at the show and will look forward to learning more about your projects and requirements!

Ruby on Rails on Windows is not just possible, it’s fabulous using WSL2 and VS Code


I've been trying on and off to enjoy Ruby on Rails development on Windows for many years. I was doing Ruby on Windows as long as 13 years ago. There's been many valiant efforts to make Rails on Windows a good experience. However, given that Windows 10 can run Linux with WSL (Windows Subsystem for Linux) and now Windows runs Linux at near-native speeds with an actual shipping Linux Kernel using WSL2, Ruby on Rails folks using Windows should do their work in WSL2.

Running Ruby on Rails on Windows

Get a recent Windows 10

WSL2 will be released later this year but for now you can easily get it by signing up for Windows Insiders Fast and making sure your version of Windows is 18945 or greater. Just run "winver" to see your build number. Run Windows Update and get the latest.

Enable WSL2

You'll want the newest Windows Subsystem for Linux. From a PowerShell admin prompt run this:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

and head over to the Windows Store and search for "Linux" or get Ubuntu 18.04 LTS directly. Download it, run it, make your sudo user.

Make sure your distro is running at max speed with WSL2. From that earlier PowerShell prompt, run wsl --list -v to see your distros and their WSL versions.

C:\Users\Scott\Desktop> wsl --list -v

  NAME            STATE           VERSION
* Ubuntu-18.04    Running         2
  Ubuntu          Stopped         1
  WLinux          Stopped         1

You can upgrade any WSL1 distro like this, and once it's done, it's done.

wsl --set-version "Ubuntu-18.04" 2

And certainly feel free to get cool fonts and styles and make yourself a nice shiny Linux experience...maybe with the Windows Terminal.

Get the Windows Terminal

Bonus points, get the new open source Windows Terminal for a better experience at the command line. Install it AFTER you've set up Ubuntu or a Linux and it'll auto-populate its menu for you. Otherwise, edit your profiles.json and make a profile with a commandLine like this:

"commandline" : "wsl.exe -d Ubuntu-18.04"

See how I'm calling wsl -d (for distro) with the short name of the distro?

Ubuntu in the Terminal Menu

Since I have a real Ubuntu environment on Windows I can just follow these instructions to set up Rails!

Set up Ruby on Rails

Ubuntu instructions work because it is Ubuntu! https://gorails.com/setup/ubuntu/18.04

Additionally, I can install as many Linuxes as I want, even a Dev vs. Prod environment if I like. WSL2 is much lighter weight than a full Virtual Machine.
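For reference, here's a condensed sketch of what those gorails.com steps boil down to on Ubuntu 18.04. Package names and the Ruby version shown are illustrative; follow the guide for the exact, current list (including rbenv, ruby-build, Node.js, and Yarn).

# Build dependencies for compiling Ruby
sudo apt-get update
sudo apt-get install -y git curl build-essential libssl-dev libreadline-dev zlib1g-dev

# After installing rbenv and ruby-build per the guide:
rbenv install 2.6.3
rbenv global 2.6.3

# Rails itself
gem install rails
rails -v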

Once Rails is set up, I'll try making a new hello world:

rails new myapp

and here's the result!

Ruby on Rails in the new Windows Terminal

I can also run "explorer.exe ." and launch Windows Explorer and see and manage my Linux files. That's allowed now in WSL2 because it's running a Plan9 server for file access.

Ubuntu files inside Explorer on Windows 10

Install VS Code and the VS Code Remote Extension Pack

I'm going to install the VS Code Remote Extension pack so I can develop from Windows on remote machines, in WSL, or in a container directly. I can click the lower left corner of VS Code or check the Command Palette for this list of menu items. Here I can "Reopen Folder in WSL" and pick the distro I want to use.

Remote options in VS Code

Now that I've opened the folder for development in WSL, look closely at the lower left corner. You can see I'm in a WSL development mode AND Visual Studio Code is recommending I install a Ruby VS Code extension...inside WSL! I don't even have Ruby and Rails on Windows. I'm going to have the Ruby language servers and VS Code headless parts live in WSL - in Linux - where they'll be the most useful.

Ruby inside WSL
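If you'd rather start from the shell, the same WSL-connected session can be launched from inside Ubuntu once the Remote extensions are installed; ~/myapp below is just a placeholder for wherever your Rails app lives.

# Run from the WSL shell: opens the folder in VS Code, connected to WSL
cd ~/myapp
code .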

This synergy, this balance between Windows (which I enjoy) and Linux (whose command line I enjoy) has turned out to be super productive. I'm able to do all the work I want - Go, Rust, Python, .NET, Ruby - and move smoothly between environments. There's not a clear separation like there is with the "run it in a VM" solution. I can access my Windows files from /mnt/c from within Linux, and I can always get to my Linux files at \\wsl$ from within Windows.

Note that I'm running rails server -b=0.0.0.0 to bind on all available IPs, and this makes Rails available to "localhost" so I can hit the Rails site from Windows! It's my machine, so it's my localhost (the networking complexities are handled by WSL2).

$ rails server -b=0.0.0.0

=> Booting Puma
=> Rails 6.0.0.rc2 application starting in development
=> Run `rails server --help` for more startup options
Puma starting in single mode...
* Version 3.12.1 (ruby 2.6.2-p47), codename: Llamas in Pajamas
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://0.0.0.0:3000
Use Ctrl-C to stop

Here it is in the new Edge (Chromium). So this is Ruby on Rails running in WSL, as browsed to from Windows, using the new Edge with Chromium at its heart. Cats and dogs, living together, mass hysteria.

Ruby on Rails on Windows from WSL

Even better, I can install the ruby-debug-ide gem inside WSL and now I'm doing interactive debugging from VS Code, but again, note that the "work" is happening inside WSL.

Debugging Rails on Windows

Enjoy!


Sponsor: Get the latest JetBrains Rider with WinForms designer, Edit & Continue, and an IL (Intermediate Language) viewer. Preliminary C# 8.0 support, rename refactoring for F#-defined symbols across your entire solution, and Custom Themes are all included.



© 2019 Scott Hanselman. All rights reserved.
     

Cloud providers unite on frictionless health data exchange


This post was co-authored by Heather Jordan Cartwright, General Manager, Microsoft Healthcare

Cloud computing is rapidly becoming a bigger and more central part of the infrastructure of healthcare. We see this as a historic shift that motivates us to think hard about how to ensure that, in this cloud-based future, interoperable health data is available as needed and without friction.

Microsoft continues to build health data interoperability into the core of the Azure cloud, empowering developers and partners to easily build data-rich health apps with the Azure API for FHIR®. We are also actively contributing to the healthcare community with open source software like the FHIR Server for Azure, bringing together developers on collaborative solutions that move the industry forward.

We take interoperability seriously. At last summer’s CMS Blue Button Developer Conference, we made a public commitment to promote the frictionless exchange of health data with our counterparts at AWS, Google, IBM, Salesforce and Oracle. That commitment remains strong.

Today, at the same conference of health IT community leaders, we are sharing a joint announcement that showcases how we have moved from principles and commitment to actions. Our activities over the past year include open-source software releases, development of new standards and implementation guides, and deployment of services that support U.S. federal interoperability mandates.

Here’s the full text of our joint announcement:


As healthcare evolves across the globe, so does our ability to improve the health and wellness of communities. Patients, providers, and health plans are striving for more value-based care, more engaging user experiences, and broader application of machine learning to assist clinicians in diagnosis and patient care.

Too often, however, patient data are inconsistently formatted, incomplete, unavailable, or missing – which can limit access to the best possible care. Equipping patients and caregivers with information and insights derived from raw data has the potential to yield significantly better outcomes. But without a robust network of clinical information, even the best people and technology may not reach their potential.

Interoperability requires the ability to share clinical information across systems, networks, and care providers. Barriers to data interoperability sit at the core of many process problems. We believe that better interoperability will unlock improvements in individual and population-level care coordination, delivery, and management. As such, we support efforts from ONC and CMS to champion greater interoperability and patient access.

This year's proposed rules focus on the use of HL7® FHIR® (Fast Healthcare Interoperability Resources) as an open standard for electronically exchanging healthcare information. FHIR builds on concepts and best-practices from other standards to define a comprehensive, secure, and semantically-extensible specification for interoperability. The FHIR community features multidisciplinary collaboration and public channels where developers interact and contribute.

We’ve been excited to use and contribute to many FHIR-focused, multi-language tools that work to solve real-world implementation challenges. We are especially proud to highlight a set of open-source tools including: Google’s FHIR protocol buffers and Apigee Health APIx, Microsoft’s FHIR Server for Azure, Cerner's FHIR integration for Apache Spark, a serverless reference architecture for FHIR APIs on AWS, Salesforce/Mulesoft's Catalyst Accelerator for Healthcare templates, and IBM’s Apache Spark service.

Beyond the production of new tools, we have also proudly participated in developing new specifications including the Bulk Data $export operation (and recent work on an $import operation), Subscriptions, and analytical SQL projections. All of these capabilities demonstrate the strength and adaptability of the FHIR specification. Moreover, through connectathons, community events, and developer conferences, our engineering teams are committed to the continued improvement of the FHIR ecosystem. Our engineering organizations have previously supported the maturation of standards in other fields and we believe FHIR version R4 — a normative release — provides an essential and appropriate target for ongoing investments in interoperability.

We have seen the early promise of standards-based APIs from market leading Health IT systems, and are excited about a future where such capabilities are universal. Together, we operate some of the largest technical infrastructure across the globe serving many healthcare and non-healthcare systems alike. Through that experience, we recognize the scale and complexity of the task at hand. We believe that the techniques required to meet the objectives of ONC and CMS are available today and can be delivered cost-effectively with well-engineered systems.

As a technology community, we believe that a forward-thinking API strategy as outlined in the proposed rules will advance the ability for all organizations to build and deploy novel applications to the benefit of patients, care providers, and administrators alike. ONC and CMS’s continued leadership, thoughtful rules, and embrace of open standards help move us decisively in that direction.

Signed,
Amazon, Google, IBM, Microsoft, Oracle, and Salesforce


The positive collaboration on open FHIR standards and the urgency for data interoperability have strengthened our commitment to an open-source-first approach in healthcare technology. We continue to incorporate feedback from the community to develop new features, and are actively identifying new places where open source software can help accelerate interoperability.

Support from the ONC and CMS in 2019 to adopt FHIR APIs as a foundation for clinical data interoperability will have a profound and positive effect on the industry. Looking forward, the application of FHIR to healthcare financial data including claims, explanation of benefit, insurance coverage, and network participation will continue to accelerate interoperability at scale and open new pathways for machine learning.

While it’s still early, we’ve seen our partners leveraging FHIR to better coordinate care, to develop innovative global health tracking systems for super-bacteria, and to proactively prevent the need for patients undergoing chemotherapy to be admitted to the emergency room. FHIR is providing a foundational platform on which our partners can drive rapid innovation, and it inspires us to work even harder to deliver technology that makes interoperable data a reality.

We’re just beginning to see what is possible in this new world of frictionless health data exchange, and we’d love for you to join us. If you want to participate, comment or learn more about FHIR, you can reach our FHIR Community chat here.

Run Windows Server and SQL Server workloads seamlessly across your hybrid environments


In recent weeks, we’ve been talking about the many reasons why Windows Server and SQL Server customers choose Azure. Security is a major concern when moving to the cloud, and Azure gives you the tools and resources you need to address those concerns. Innovation in data can open new doors as you move to the cloud, and Azure offers the easiest cloud transition, especially for customers running on SQL Server 2008 or 2008 R2 with concerns about end of support. Today we’re going to look at another critical decision point for customers as they move to the cloud. How easy is it to combine new cloud resources with what you already have on-premises? Many Windows Server and SQL Server customers choose Azure for its industry leading hybrid capabilities.

Microsoft is committed to enabling a hybrid approach to cloud adoption. Our commitment and passion stem from a deep understanding of our customers and their businesses over the past several decades. We understand that customers have business imperatives to keep certain workloads and data on premises, and our goal is to meet them where they are and prepare them for the future by providing the right technologies for every step along the way. That’s why we designed and built Azure to be hybrid from the beginning and have been delivering continuous innovation to help customers operate their hybrid environments seamlessly across on-premises, cloud and edge. Enterprise customers are choosing Azure for their Windows Server and SQL Server workloads. In fact, in a 2019 Microsoft survey of 500 enterprise customers, when those customers were asked about their migration plans for Windows Server, they were 30 percent more likely to choose Azure.

Customers trust Azure to power their hybrid environments

Take Komatsu as an example. Komatsu achieved a 49 percent cost reduction and nearly a 30 percent performance gain by moving on-premises applications to Azure SQL Database Managed Instance and building a holistic data management and analytics solution across their hybrid infrastructure.

Operating a $15 billion enterprise, Smithfield Foods slashed datacenter costs by 60 percent and accelerated application delivery from two months to one day using a hybrid cloud model built on Azure. Smithfield has factories and warehouses often in rural areas that have less than ideal internet bandwidth. It relies on Azure ExpressRoute to connect their major office locations globally to Azure to gain the flexibility and speed needed.

The government of Malta built a complete hybrid cloud ecosystem powered by Azure and Azure Stack to modernize its infrastructure. This hybrid architecture, combined with a robust billing platform and integrated self-service backup, brings a new level of flexibility and agility to Maltese government operations, while also providing citizens and businesses more efficient services that they can access whenever they want.

Let’s look at some of Azure’s unique built-in hybrid capabilities.

Bringing the cloud to local datacenters with Azure Stack

Azure Stack, our unparalleled hybrid offering, lets customers build and run cloud-native applications with Azure services in their local datacenters or in disconnected locations. Today, it’s available in 92 countries and customers like Airbus Defense & Space, iMOKO, and KPMG Norway are using Azure Stack to bring cloud benefits on-premises.

We recently introduced Azure Stack HCI solutions so customers can run virtualized applications on-premises in a familiar way and enjoy easy access to off-the-shelf Azure management services such as backup and disaster recovery.

With Azure, Azure Stack, and Azure Stack HCI, Microsoft is the only cloud provider in the market that offers a comprehensive set of hybrid solutions.

Modernizing server management with Windows Admin Center

Windows Admin Center, a modern browser-based application free of charge, allows customers to manage Windows Servers on-premises, in Azure, or in other clouds. With Windows Admin Center, customers can easily access Azure management services to perform tasks such as disaster recovery, backup, patching, and monitoring. Since its launch just over a year ago, Windows Admin Center has seen tremendous momentum, managing more than 2.5 million server nodes each month.

Screenshot of the Windows Admin Center - Azure Hybrid Center

Easily migrating on-premises SQL Server to Azure

Azure SQL Database is a fully managed and intelligent database service. SQL Database is evergreen, so it’s always up to date: no more worrying about patching, upgrades, or end of support. Azure SQL Database Managed Instance has the full surface area of the SQL Server database engine in Azure. Customers use Managed Instance to migrate SQL Server to Azure without changing the application code. Because the service is consistent with on-premises SQL Server, customers can continue using familiar features, tools, and resources in Azure.
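As a rough illustration, standing up a Managed Instance from the Azure CLI looks something like the sketch below. All names and sizes are placeholders, the instance needs a dedicated subnet prepared per the Managed Instance networking requirements, and parameter names should be checked against az sql mi create --help for your CLI version.

# Create a General Purpose, Gen5 Managed Instance in an existing, prepared subnet
az sql mi create \
  --name my-managed-instance \
  --resource-group my-rg \
  --location westus2 \
  --admin-user sqladmin \
  --admin-password "<strong-password>" \
  --subnet /subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/mi-subnet \
  --edition GeneralPurpose \
  --family Gen5 \
  --capacity 8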

With SQL Database Managed Instance, customers like Komatsu, Carlsberg Group, and AllScripts were able to quickly migrate SQL databases to Azure with minimal downtime and benefit from built-in PaaS capabilities such as automatic patching, backup, and high availability.

Connecting hybrid environments with fast and secure networking services

Customers build extremely fast private connections between Azure and local infrastructure using Azure ExpressRoute at bandwidths up to 100 Gbps, allowing access both to and through Azure. Azure Virtual WAN makes it possible to quickly add and connect thousands of branch sites by automating configuration and connectivity to Azure and for global transit across customer sites, using the Microsoft global network.

Customers are also taking full advantage of services like Azure Firewall, Azure DDoS Protection, and Azure Front Door Service to secure virtual networks and deliver the best application performance experience to users.

Managing anywhere access with a single identity platform

Over 90 percent of enterprise customers use Active Directory on-premises. With Azure, customers can easily connect on-premises Active Directory with Azure Active Directory to provide seamless directory services for all Office 365 and Azure services. Azure Active Directory gives users a single sign-on experience across cloud, mobile and on-premises applications, and secures data from unauthorized access without compromising productivity.

Innovating continuously at the edge

Customers are extending their hybrid environments to the edge so they can take on new business opportunities. Microsoft has been leading the innovation in this space. The following are some examples.

Azure Data Box Edge provides a cloud managed compute platform for containers at the edge, enabling customers to process data at the edge and accelerate machine learning workloads. Data Box Edge also enables customers to transfer data over the internet to Azure in real-time for deeper analytics, model re-training at cloud scale or long-term storage.

At Microsoft Build 2019, we announced the preview of Azure SQL Database Edge, bringing the SQL engine to the edge. Developers will now be able to adopt a consistent programming surface area to develop on a SQL database and run the same code on-premises, in the cloud, or at the edge.

Get started – Integrate your hybrid environments with Azure

Check out the resources on Azure hybrid such as overviews, videos, and demos so you can learn more about how to use Azure to run Windows Server and SQL Server workloads successfully across your hybrid environments.

Microsoft ML Server 9.4 now available


Microsoft Machine Learning Server, the enhanced deployment platform for R and Python applications, has been updated to version 9.4. This update includes the open source R 3.5.2 and Python 3.7.1 engines, and supports integration with Spark 2.4. Microsoft ML Server also includes specialized R packages and Python modules focused on application deployment, scalable machine learning, and integration with SQL Server.

Microsoft Machine Learning Server is used by organizations that need to use R and/or Python code in production applications. For some examples of deployments, take a look at these open-source solution templates for credit risk estimation, energy demand forecasting, fraud detection and many other applications.

MLServer

Microsoft ML Server 9.4 is available now. For more details on this update, take a look at the announcement at the link below.

SQL Server Blog: Microsoft Machine Learning Server 9.4 is now available 

Windows 10 SDK Preview Build 18945 available now!


Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 18945 or greater). The Preview SDK Build 18945 contains bug fixes and under development changes to the API surface area.

The Preview SDK can be downloaded from the developer section on Windows Insider.

For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.

Things to note:

  • This build works in conjunction with previously released SDKs and Visual Studio 2017 and 2019. You can install this SDK and still also continue to submit your apps that target Windows 10 build 1903 or earlier to the Microsoft Store.
  • The Windows SDK will now formally only be supported by Visual Studio 2017 and greater. You can download Visual Studio 2019 here.
  • This build of the Windows SDK will install only on Windows 10 Insider Preview builds.
  • In order to assist with script access to the SDK, the ISO will also be able to be accessed through the following static URL: https://software-download.microsoft.com/download/sg/Windows_InsiderPreview_SDK_en-us_18945_1.iso.

Tools Updates

Message Compiler (mc.exe)

  • Now detects the Unicode byte order mark (BOM) in .mc files. If the .mc file starts with a UTF-8 BOM, it will be read as a UTF-8 file. Otherwise, if it starts with a UTF-16LE BOM, it will be read as a UTF-16LE file. If the -u parameter was specified, it will be read as a UTF-16LE file. Otherwise, it will be read using the current code page (CP_ACP).
  • Now avoids one-definition-rule (ODR) problems in MC-generated C/C++ ETW helpers caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of MCGEN_EVENTWRITETRANSFER are linked into the same binary, the MC-generated ETW helpers will now respect the definition of MCGEN_EVENTWRITETRANSFER in each .cpp file instead of arbitrarily picking one or the other).

Windows Trace Preprocessor (tracewpp.exe)

  • Now supports Unicode input (.ini, .tpl, and source code) files. Input files starting with a UTF-8 or UTF-16 byte order mark (BOM) will be read as Unicode. Input files that do not start with a BOM will be read using the current code page (CP_ACP). For backwards-compatibility, if the -UnicodeIgnore command-line parameter is specified, files starting with a UTF-16 BOM will be treated as empty.
  • Now supports Unicode output (.tmh) files. By default, output files will be encoded using the current code page (CP_ACP). Use command-line parameters -cp:UTF-8 or -cp:UTF-16 to generate Unicode output files.
  • Behavior change: tracewpp now converts all input text to Unicode, performs processing in Unicode, and converts output text to the specified output encoding. Earlier versions of tracewpp avoided Unicode conversions and performed text processing assuming a single-byte character set. This may lead to behavior changes in cases where the input files do not conform to the current code page. In cases where this is a problem, consider converting the input files to UTF-8 (with BOM) and/or using the -cp:UTF-8 command-line parameter to avoid encoding ambiguity.

TraceLoggingProvider.h

  • Now avoids one-definition-rule (ODR) problems caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of TLG_EVENT_WRITE_TRANSFER are linked into the same binary, the TraceLoggingProvider.h helpers will now respect the definition of TLG_EVENT_WRITE_TRANSFER in each .cpp file instead of arbitrarily picking one or the other).
  • In C++ code, the TraceLoggingWrite macro has been updated to enable better code sharing between similar events using variadic templates.

Signing your apps with Device Guard Signing

  • We are making it easier for you to sign your app. Device Guard signing is a Device Guard feature that is available in Microsoft Store for Business and Education. Signing allows enterprises to guarantee every app comes from a trusted source. Our goal is to make signing your MSIX package easier.

Breaking Changes

Removal of IRPROPS.LIB

In this release irprops.lib has been removed from the Windows SDK. Apps that were linking against irprops.lib can switch to bthprops.lib as a drop-in replacement.

API Updates, Additions and Removals

The following APIs have been added to the platform since the release of Windows 10 SDK, version 1903, build 18362.

Additions:

 

 namespace Windows.Devices.Input {
  public sealed class PenButtonListener
  public sealed class PenDockedEventArgs
  public sealed class PenDockListener
  public sealed class PenTailButtonClickedEventArgs
  public sealed class PenTailButtonDoubleClickedEventArgs
  public sealed class PenTailButtonLongPressedEventArgs
  public sealed class PenUndockedEventArgs
}
namespace Windows.Devices.Sensors {
  public sealed class Accelerometer {
    AccelerometerDataThreshold ReportThreshold { get; }
  }
  public sealed class AccelerometerDataThreshold
  public sealed class Altimeter {
    AltimeterDataThreshold ReportThreshold { get; }
  }
  public sealed class AltimeterDataThreshold
  public sealed class Barometer {
    BarometerDataThreshold ReportThreshold { get; }
  }
  public sealed class BarometerDataThreshold
  public sealed class Compass {
    CompassDataThreshold ReportThreshold { get; }
  }
  public sealed class CompassDataThreshold
  public sealed class Gyrometer {
    GyrometerDataThreshold ReportThreshold { get; }
  }
  public sealed class GyrometerDataThreshold
  public sealed class Inclinometer {
    InclinometerDataThreshold ReportThreshold { get; }
  }
  public sealed class InclinometerDataThreshold
  public sealed class LightSensor {
    LightSensorDataThreshold ReportThreshold { get; }
  }
  public sealed class LightSensorDataThreshold
  public sealed class Magnetometer {
    MagnetometerDataThreshold ReportThreshold { get; }
  }
  public sealed class MagnetometerDataThreshold
}
namespace Windows.Foundation.Metadata {
  public sealed class AttributeNameAttribute : Attribute
  public sealed class FastAbiAttribute : Attribute
  public sealed class NoExceptionAttribute : Attribute
}
namespace Windows.Graphics.Capture {
  public sealed class GraphicsCaptureSession : IClosable {
    bool IsCursorCaptureEnabled { get; set; }
  }
}
namespace Windows.Management.Deployment {
  public sealed class AddPackageOptions
  public enum DeploymentOptions : uint {
    StageInPlace = (uint)4194304,
  }
  public sealed class PackageManager {
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> AddPackageByUriAsync(Uri packageUri, AddPackageOptions options);
    IIterable<Package> FindProvisionedPackages();
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> RegisterPackageByUriAsync(Uri manifestUri, RegisterPackageOptions options);
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> RegisterPackagesByFullNameAsync(IIterable<string> packageFullNames, DeploymentOptions deploymentOptions);
    IAsyncOperationWithProgress<DeploymentResult, DeploymentProgress> StagePackageByUriAsync(Uri packageUri, StagePackageOptions options);
  }
  public enum PackageTypes : uint {
   All = (uint)4294967295,
  }
  public sealed class RegisterPackageOptions
  public enum RemovalOptions : uint {
    PreserveRoamableApplicationData = (uint)128,
  }
  public sealed class StagePackageOptions
}
namespace Windows.Media.Capture {
  public sealed class MediaCapture : IClosable {
    MediaCaptureRelativePanelWatcher CreateRelativePanelWatcher(StreamingCaptureMode captureMode, DisplayRegion displayRegion);
  }
  public sealed class MediaCaptureRelativePanelWatcher : IClosable
}
namespace Windows.Media.Capture.Frames {
  public sealed class MediaFrameSourceInfo {
    Panel GetRelativePanel(DisplayRegion displayRegion);
  }
}
namespace Windows.Media.Devices {
  public sealed class PanelBasedOptimizationControl
}
namespace Windows.Media.MediaProperties {
  public static class MediaEncodingSubtypes {
    public static string Pgs { get; }
    public static string Srt { get; }
    public static string Ssa { get; }
    public static string VobSub { get; }
  }
  public sealed class TimedMetadataEncodingProperties : IMediaEncodingProperties {
    public static TimedMetadataEncodingProperties CreatePgs();
    public static TimedMetadataEncodingProperties CreateSrt();
    public static TimedMetadataEncodingProperties CreateSsa(byte[] formatUserData);
    public static TimedMetadataEncodingProperties CreateVobSub(byte[] formatUserData);
  }
}
namespace Windows.Networking.BackgroundTransfer {
  public sealed class DownloadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
    void RemoveRequestHeader(string headerName);
    void SetRequestHeader(string headerName, string headerValue);
  }
  public sealed class UploadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
    void RemoveRequestHeader(string headerName);
    void SetRequestHeader(string headerName, string headerValue);
  }
}
namespace Windows.Networking.NetworkOperators {
  public interface INetworkOperatorTetheringAccessPointConfiguration2
  public interface INetworkOperatorTetheringManagerStatics4
  public sealed class NetworkOperatorTetheringAccessPointConfiguration : INetworkOperatorTetheringAccessPointConfiguration2 {
    TetheringWiFiBand Band { get; set; }
    bool IsBandSupported(TetheringWiFiBand band);
    IAsyncOperation<bool> IsBandSupportedAsync(TetheringWiFiBand band);
  }
  public sealed class NetworkOperatorTetheringManager {
    public static void DisableTimeout(TetheringTimeoutKind timeoutKind);
    public static IAsyncAction DisableTimeoutAsync(TetheringTimeoutKind timeoutKind);
    public static void EnableTimeout(TetheringTimeoutKind timeoutKind);
    public static IAsyncAction EnableTimeoutAsync(TetheringTimeoutKind timeoutKind);
    public static bool IsTimeoutEnabled(TetheringTimeoutKind timeoutKind);
    public static IAsyncOperation<bool> IsTimeoutEnabledAsync(TetheringTimeoutKind timeoutKind);
  }
  public enum TetheringTimeoutKind
  public enum TetheringWiFiBand
}
namespace Windows.Security.Authentication.Web.Core {
  public sealed class WebAccountMonitor {
    event TypedEventHandler<WebAccountMonitor, WebAccountEventArgs> AccountPictureUpdated;
  }
}
namespace Windows.Storage {
  public sealed class StorageFile : IInputStreamReference, IRandomAccessStreamReference, IStorageFile, IStorageFile2, IStorageFilePropertiesWithAvailability, IStorageItem, IStorageItem2, IStorageItemProperties, IStorageItemProperties2, IStorageItemPropertiesWithProvider {
    public static IAsyncOperation<StorageFile> GetFileFromPathForUserAsync(User user, string path);
  }
  public sealed class StorageFolder : IStorageFolder, IStorageFolder2, IStorageFolderQueryOperations, IStorageItem, IStorageItem2, IStorageItemProperties, IStorageItemProperties2, IStorageItemPropertiesWithProvider {
    public static IAsyncOperation<StorageFolder> GetFolderFromPathForUserAsync(User user, string path);
  }
}
namespace Windows.Storage.Provider {
  public static class StorageProviderSyncRootManager {
    public static bool IsSupported();
  }
}
namespace Windows.System {
  public sealed class UserChangedEventArgs {
    IVectorView<UserWatcherUpdateKind> ChangedPropertyKinds { get; }
  }
  public enum UserWatcherUpdateKind
}
namespace Windows.UI.Composition.Interactions {
  public sealed class InteractionTracker : CompositionObject {
    int TryUpdatePosition(Vector3 value, InteractionTrackerClampingOption option, InteractionTrackerPositionUpdateOption posUpdateOption);
  }
  public enum InteractionTrackerPositionUpdateOption
}
namespace Windows.UI.Composition.Particles {
  public sealed class ParticleAttractor : CompositionObject
  public sealed class ParticleAttractorCollection : CompositionObject, IIterable<ParticleAttractor>, IVector<ParticleAttractor>
  public class ParticleBaseBehavior : CompositionObject
  public sealed class ParticleBehaviors : CompositionObject
  public sealed class ParticleColorBehavior : ParticleBaseBehavior
  public struct ParticleColorBinding
  public sealed class ParticleColorBindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleColorBinding>>, IMap<float, ParticleColorBinding>
  public enum ParticleEmitFrom
  public sealed class ParticleEmitterVisual : ContainerVisual
  public sealed class ParticleGenerator : CompositionObject
  public enum ParticleInputSource
  public enum ParticleReferenceFrame
  public sealed class ParticleScalarBehavior : ParticleBaseBehavior
  public struct ParticleScalarBinding
  public sealed class ParticleScalarBindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleScalarBinding>>, IMap<float, ParticleScalarBinding>
  public enum ParticleSortMode
  public sealed class ParticleVector2Behavior : ParticleBaseBehavior
  public struct ParticleVector2Binding
  public sealed class ParticleVector2BindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleVector2Binding>>, IMap<float, ParticleVector2Binding>
  public sealed class ParticleVector3Behavior : ParticleBaseBehavior
  public struct ParticleVector3Binding
  public sealed class ParticleVector3BindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleVector3Binding>>, IMap<float, ParticleVector3Binding>
  public sealed class ParticleVector4Behavior : ParticleBaseBehavior
  public struct ParticleVector4Binding
  public sealed class ParticleVector4BindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleVector4Binding>>, IMap<float, ParticleVector4Binding>
}
namespace Windows.UI.Input {
  public sealed class CrossSlidingEventArgs {
    uint ContactCount { get; }
  }
  public sealed class DraggingEventArgs {
    uint ContactCount { get; }
  }
  public sealed class GestureRecognizer {
    uint HoldMaxContactCount { get; set; }
    uint HoldMinContactCount { get; set; }
    float HoldRadius { get; set; }
    TimeSpan HoldStartDelay { get; set; }
    uint TapMaxContactCount { get; set; }
    uint TapMinContactCount { get; set; }
    uint TranslationMaxContactCount { get; set; }
    uint TranslationMinContactCount { get; set; }
  }
  public sealed class HoldingEventArgs {
    uint ContactCount { get; }
    uint CurrentContactCount { get; }
  }
  public sealed class ManipulationCompletedEventArgs {
    uint ContactCount { get; }
    uint CurrentContactCount { get; }
  }
  public sealed class ManipulationInertiaStartingEventArgs {
    uint ContactCount { get; }
  }
  public sealed class ManipulationStartedEventArgs {
    uint ContactCount { get; }
  }
  public sealed class ManipulationUpdatedEventArgs {
    uint ContactCount { get; }
    uint CurrentContactCount { get; }
  }
  public sealed class RightTappedEventArgs {
    uint ContactCount { get; }
  }
  public sealed class SystemButtonEventController : AttachableInputObject
  public sealed class SystemFunctionButtonEventArgs
  public sealed class SystemFunctionLockChangedEventArgs
  public sealed class SystemFunctionLockIndicatorChangedEventArgs
  public sealed class TappedEventArgs {
    uint ContactCount { get; }
  }
}
namespace Windows.UI.Input.Inking {
  public sealed class InkModelerAttributes {
    bool UseVelocityBasedPressure { get; set; }
  }
}
namespace Windows.UI.Text.Core {
  public sealed class CoreTextServicesManager {
    public static TextCompositionKind TextCompositionKind { get; }
  }
  public enum TextCompositionKind
}
namespace Windows.UI.ViewManagement {
  public sealed class ApplicationView {
    ScreenCaptureDisabledBehavior ScreenCaptureDisabledBehavior { get; set; }
  }
  public enum ApplicationViewMode {
    Spanning = 2,
  }
  public enum ScreenCaptureDisabledBehavior
  public sealed class UISettings {
    event TypedEventHandler<UISettings, UISettingsAnimationsEnabledChangedEventArgs> AnimationsEnabledChanged;
    event TypedEventHandler<UISettings, UISettingsMessageDurationChangedEventArgs> MessageDurationChanged;
  }
  public sealed class UISettingsAnimationsEnabledChangedEventArgs
  public sealed class UISettingsMessageDurationChangedEventArgs
}
namespace Windows.UI.ViewManagement.Core {
  public sealed class CoreInputView {
    event TypedEventHandler<CoreInputView, CoreInputViewHidingEventArgs> PrimaryViewHiding;
    event TypedEventHandler<CoreInputView, CoreInputViewShowingEventArgs> PrimaryViewShowing;
  }
  public sealed class CoreInputViewHidingEventArgs
  public enum CoreInputViewKind {
    Symbols = 4,
  }
  public sealed class CoreInputViewShowingEventArgs
  public sealed class UISettingsController
}
namespace Windows.UI.WindowManagement {
  public sealed class AppWindow {
    void SetPreferredTopMost();
    void SetRelativeZOrderBeneath(AppWindow appWindow);
  }
  public sealed class AppWindowChangedEventArgs {
    bool DidOffsetChange { get; }
  }
  public enum AppWindowPresentationKind {
    Snapped = 5,
    Spanning = 4,
  }
  public sealed class SnappedPresentationConfiguration : AppWindowPresentationConfiguration
  public sealed class SpanningPresentationConfiguration : AppWindowPresentationConfiguration
}
namespace Windows.UI.Xaml.Controls {
  public class HandwritingView : Control {
    UIElement HostUIElement { get; set; }
    public static DependencyProperty HostUIElementProperty { get; }
    CoreInputDeviceTypes InputDeviceTypes { get; set; }
    bool IsSwitchToKeyboardButtonVisible { get; set; }
    public static DependencyProperty IsSwitchToKeyboardButtonVisibleProperty { get; }
    double MinimumColorDifference { get; set; }
    public static DependencyProperty MinimumColorDifferenceProperty { get; }
    bool PreventAutomaticDismissal { get; set; }
    public static DependencyProperty PreventAutomaticDismissalProperty { get; }
    bool ShouldInjectEnterKey { get; set; }
    public static DependencyProperty ShouldInjectEnterKeyProperty { get; }
    event TypedEventHandler<HandwritingView, HandwritingViewCandidatesChangedEventArgs> CandidatesChanged;
    event TypedEventHandler<HandwritingView, HandwritingViewContentSizeChangingEventArgs> ContentSizeChanging;
    void SelectCandidate(uint index);
    void SetTrayDisplayMode(HandwritingViewTrayDisplayMode displayMode);
  }
  public sealed class HandwritingViewCandidatesChangedEventArgs
  public sealed class HandwritingViewContentSizeChangingEventArgs
  public enum HandwritingViewTrayDisplayMode
}
namespace Windows.UI.Xaml.Core.Direct {
  public enum XamlEventIndex {
    HandwritingView_ContentSizeChanging = 321,
  }
  public enum XamlPropertyIndex {
    HandwritingView_HostUIElement = 2395,
    HandwritingView_IsSwitchToKeyboardButtonVisible = 2393,
    HandwritingView_MinimumColorDifference = 2396,
    HandwritingView_PreventAutomaticDismissal = 2397,
    HandwritingView_ShouldInjectEnterKey = 2398,
  }
}



Understanding and leveraging Azure SQL Database’s SLA


When data is the lifeblood of your business, you want to ensure your databases are reliable, secure, and available when called upon to perform. Service level agreements (SLA) set an expectation for uptime and performance, and are a key input for designing systems to meet business needs. We recently published a new version of the SQL Database SLA, guaranteeing the highest availability among relational database services as well as introducing the industry’s first business continuity SLA. These updates further cement our commitment to ensuring your data is safe and the apps and processes your business relies upon continue running in the face of a disruptive event.

As we indicated in the recent service update, we made two major changes in the SLA. First, Azure SQL Database now offers a 99.995% availability SLA for zone redundant databases in its business critical tier. This is the highest SLA in the industry among all relational database services. It is also backed by up to a 100% monthly cost credit for when the SLA is not maintained. Second, we offer a business continuity SLA for databases in the business critical tier that are geo-replicated between two different Azure regions. That SLA comes with very strong guarantees of a five second recovery point objective (RPO) and a 30 second recovery time objective (RTO), including a 100% monthly cost credit when the SLA is not maintained. Azure SQL Database is the only relational database service in the industry offering a business continuity SLA.

The following table provides a quick side by side comparison of different cloud vendors’ SLAs.

Platform             Uptime     Uptime max credit   RTO          RTO max credit   RPO         RPO max credit
Azure SQL Database   99.995%    100%                30 seconds   100%             5 seconds   100%
AWS RDS              99.95%     100%                n/a          n/a              n/a         n/a
GCP Cloud SQL        99.95%     50%                 n/a          n/a              n/a         n/a
Alibaba ApsaraDB     99.9%      25%                 n/a          n/a              n/a         n/a
Oracle Cloud         99.99%     25%                 n/a          n/a              n/a         n/a

(Uptime and its maximum credit refer to the availability SLA; RTO, RPO, and their maximum credits refer to the business continuity SLA.)

Data current as of July 18, 2019 and subject to change without notice.

Understanding availability SLA

The availability SLA reflects SQL Database’s ability to automatically handle disruptive events that periodically occur in every region. It relies on the in-region redundancy of the compute and storage resources, constant health monitoring and self-healing operations using automatic failover within the region. These operations rely on synchronously replicated data and incur zero data loss. Therefore, uptime is the most important metric for availability. Azure SQL Database will continue to offer a baseline 99.99% availability SLA across all of its service tiers, but is now providing a higher 99.995% SLA for the business critical or premium tiers in the regions that support availability zones. The business critical tier, as the name suggests, is designed for the most demanding applications, both in terms of performance and reliability. By integrating this service tier with Azure availability zones (AZ), we leverage the additional fault tolerance and isolation that AZs provide, which in turn allows us to offer a higher availability guarantee using the compute and storage redundancy across AZs and the same self-healing operations. Because the compute and storage redundancy is built in for business critical databases and elastic pools, using availability zones comes at no additional cost to you. Our documentation, “High-availability and Azure SQL Database” provides more details of how the business critical service tier leverages availability zones. You can also find the list of regions that support AZs in our documentation, “What are Availability Zones in Azure.”

99.99% availability means that for any database, including those in the business critical tier, the downtime should not exceed 52.56 minutes per year. Zone redundancy increases availability to 99.995%, which means a maximum downtime of only 26.28 minutes per year, a 50% reduction. A minute of downtime is defined as the period during which all attempts to establish a connection failed. To achieve this level of availability, all you need to do is select the zone redundant configuration when creating a business critical database or elastic pool. You can do so programmatically using a create or update database API, or in the Azure portal as illustrated in the following diagram.

Screenshot of create or update database API in Azure portal

We recommend using the Gen5 compute generation because the zone redundant capacity is based on Gen5 in most regions. The conversion to a zone redundant configuration is an asynchronous online process, similar to what happens when you change the service tier or compute size of the database. It does not require quiescing or taking your application offline. As long as your connectivity logic is properly implemented, your application will not be interrupted during this transition.
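If you prefer to script this rather than use the portal, a minimal Azure CLI sketch is shown below. The resource group, server, and database names are placeholders, and parameter details can vary by CLI version, so check az sql db create --help before relying on it.

# hypothetical resource names; create a new business critical database with zone redundancy
az sql db create --resource-group <rg> --server <server> --name <db> \
    --edition BusinessCritical --family Gen5 --capacity 4 --zone-redundant true

# convert an existing database; this is the same online operation described above
az sql db update --resource-group <rg> --server <server> --name <db> --zone-redundant true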

Understanding business continuity SLA

Business continuity is the ability of a service to quickly recover and continue to function during catastrophic events with an impact that cannot be mitigated by the in-region self-healing operations. While these types of unplanned events are rare, their impact can be dramatic. Business continuity is implemented by provisioning stand-by replicas of your databases in two or more geographically separated locations. Because of the long distances between those locations, asynchronous data replication is used to avoid performance impact from network latency. The main trade-off of using asynchronous replication is the potential for data loss. The active geo-replication feature in SQL Database is designed to enable business continuity by creating and managing geographically redundant databases. It’s been in production for several years and we have plenty of telemetry to support very aggressive guarantees.

There are two common metrics used to measure the impact of business continuity events. Recovery time objective (RTO) measures how quickly the availability of the application can be restored. Recovery point objective (RPO) measures the maximum expected data loss after the availability is restored. Not only do we provide SLAs of five seconds for RPO and 30 seconds for RTO, but we also offer an industry first, 100% service credit if these SLAs are not met. That means if any of your database failover requests do not complete within 30 seconds, or any time the replication lag exceeds five seconds at the 99th percentile within an hour, you are eligible for a service credit for 100% of the monthly cost of the secondary database in question. To qualify for the service credit, the secondary database must have the same compute size as the primary. Note, however, that these metrics should not be interpreted as a guarantee of automatic recovery from a catastrophic outage. They reflect Azure SQL Database’s reliability and performance when synchronizing your data and the speed of the failover when your application requests it. If you prefer a fully automated recovery process, you should consider auto-failover groups with automatic failover policy, which has a one hour RTO.
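As a point of reference, a failover request can be issued with the T-SQL statement referenced in the query below or from the Azure CLI. The following is only a sketch with placeholder names; confirm the parameters against the current az sql db replica documentation before use.

# promote the geo-secondary on <secondary-server> to primary (planned failover, no data loss)
az sql db replica set-primary --resource-group <rg> --server <secondary-server> --name <db>

# forced failover that does not wait for synchronization; may lose data within the RPO window
az sql db replica set-primary --resource-group <rg> --server <secondary-server> --name <db> --allow-data-loss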

To measure the duration of the failover request, i.e. the RTO compliance, you can use the following query against sys.dm_operation_status in the master database on the secondary server. Please be aware that the operation status information is only kept for 24 hours.

SELECT datediff(s, start_time, last_modify_time) AS [Failover time in seconds]
FROM sys.dm_operation_status
WHERE major_resource_id = '<my_secondary_db_name>'
  AND operation = 'ALTER DATABASE FORCE FAILOVER ALLOW DATA LOSS'
  AND state = 2
ORDER BY start_time DESC;

The following query against sys.dm_replication_link_status in the primary database will show replication lag in seconds, i.e. the RPO compliance, for the secondary database created on partner_server. You should run the same query every 30 seconds or less to have a statistically significant set of measurements per hour.

SELECT link_guid, partner_server, replication_lag_sec FROM sys.dm_replication_link_status
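If you want to automate that sampling, a rough bash sketch using sqlcmd is shown below. The server, database, and credential values are placeholders, and in practice you would pull the password from a secret store rather than the command line.

# sample replication lag on the primary every 30 seconds, per the guidance above
while true; do
  sqlcmd -S <primary-server>.database.windows.net -d <primary-db> -U <user> -P '<password>' \
         -Q "SELECT link_guid, partner_server, replication_lag_sec FROM sys.dm_replication_link_status"
  sleep 30
done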

Combining availability and business continuity to build mission critical applications

What does the updated SLA mean to you in practical terms? Our goal is enabling you to build highly resilient and reliable services on Azure, backed by SQL Database. But for some mission critical applications, even 26 minutes of downtime per year may not be acceptable. Combining a zone redundant database configuration with a business continuity design creates an opportunity to further increase availability for the application. This SLA release is the first step toward realizing that opportunity.

Azure Cost Management updates – July 2019


Whether you're a new student, thriving startup, or the largest enterprise, you have financial constraints and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Microsoft Azure Cost Management comes in.

We're always looking for ways to learn more about your challenges and how Azure Cost Management can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:

Let's dig into the details.

 

Azure Cost Management for partners

Partners play a critical role in successful planning, implementation, and long-term cloud operations for organizations, big and small. Whether you're a partner who sells to or manages Azure on behalf of another organization or you're working with a partner to help keep you focused on your core mission instead of managing infrastructure, you need a way to understand, control, and optimize your cloud costs. This is where Azure Cost Management comes in!

In June, we announced new capabilities in the Cloud Solution Provider (CSP) program coming in October 2019. With this update, CSP partners can onboard customers using the same Microsoft Customer Agreement (MCA) platform used across Azure. CSP partners and customers will see product alignment, which includes common Azure Cost Management tools, available at the same time they're available for pay-as-you-go (PAYG) and enterprise customers.

Azure Cost Management capabilities optimized for partners and their customers will be released over time, starting with the ability to enable Azure Cost Management for MCA customers. You'll see periodic updates throughout Q4 2019 and 2020, including support for customers who do not transition to MCA. Once enabled, partners and customers will have the full benefits of Azure Cost Management.

If you're a managed service provider, be sure to check out Azure Lighthouse, which enables partners to more efficiently manage resources at scale across customers and directories. Help your customers manage their Azure and AWS costs in a single place with Azure Cost Management!

Stay tuned for more updates in October 2019. We're eager to bring much-anticipated Azure Cost Management capabilities to partners and their customers!

 

Marketplace usage for pay-as-you-go (PAYG) subscriptions

Last month, we talked about how effective cost management starts by getting all your costs into a single place with a single taxonomy. Now, with the addition of Azure Marketplace usage for pay-as-you-go (PAYG) subscriptions, you have a more complete picture of your costs.

Azure and Marketplace charges have different billing cycles. To investigate and reconcile billed charges, select the appropriate Azure or Marketplace invoice period in the date picker. To view all charges together, select calendar months and group by publisher type to see a breakdown of your Azure and Marketplace costs.

An image showing marketplace PAYG filters.

 

Cost Management Labs

Cost Management Labs is the way to get the latest cost management features and enhancements! It's the same great service you're used to, but with a few extra features we're testing and looking for feedback on as we finalize them before releasing them to the world. This is your chance to drive the direction and impact the future of Azure Cost Management.

Participating in Cost Management Labs is as easy as opening the Azure preview portal and selecting Cost Management from Azure Home. On the Cost Management overview, you'll see the preview features available for testing, along with links to share new ideas or report any bugs that may pop up. Reporting a bug is a direct line back to the Azure Cost Management engineering team, where we'll work with you to understand and resolve the issue.

Here's what you'll see in Cost Management Labs today:

  • Save and share customized views directly within cost analysis
  • Download your customized view in cost analysis as an image
  • Several small bug fixes and improvements, like minor design changes within cost analysis

Of course, that's not all! There's more coming and we're very eager to hear your thoughts and understand what you'd like to see next. What are you waiting for? Try Cost Management Labs today!

An image showing the Cost Management Labs overview tab. 

Save and share customized views in cost analysis

Customizing a view in cost analysis is easy. Just pick the date range you need, group the data to see a breakdown, choose the right visualization, and you're good to go! Pin your view to a dashboard for one-click access, then share the dashboard with your team so everyone can track cost from a single place.

An image showing how to use the pin button to save customized views in cost analysis.

You can also share a direct link to your customized view so others can copy and personalize it for themselves:

An image showing how to share customized views in cost analysis.

Both sharing options offer flexibility, but sometimes you need something more convenient: the ability to save customized views and share them with others directly from within cost analysis. Now you can!

An image showing how to use save customized views in cost analysis.

People with Cost Management Contributor (or greater) access can create shared views. You can create up to 50 shared views per scope.

Anyone can save up to 50 private views, even if they only have read access. These views cannot be shared with others directly in cost analysis, but they can be pinned to a dashboard or shared via URL so others can save a copy.

All views are accessible from the view menu. You'll see your private views first, then those shared across the scope, and lastly the built-in views which are always available.

An image showing the view menu of all saved views, private and shared.

Need to share your view outside of the portal? Simply download the chart as an image and copy it into an email or presentation to share it with your team. You'll see a slightly redesigned Export menu which now offers a PNG option when viewing charts. The table view cannot be downloaded as an image.

An image showing the export menu, for sharing views outside of the portal.

You'll also see a few small design changes to the filter bar in the preview:

  • The scope pill shows more of the scope name for added clarity
  • The view menu has been restyled based on its growing importance with saved views
  • The granularity and group by pickers are closer to the main chart to address confusion about what they apply to

This is just the first step. There's more to come. Try the preview today and let us know what you'd like to see next! We're excited to hear your ideas!

 

Viewing costs in different currencies

Every organization has its own unique setup and challenges. You may get a single Azure invoice or perhaps you need separate invoices per department. You may even be in a multi-national organization with multiple billing accounts in different currencies. Or perhaps you simply moved subscriptions between billing accounts in different currencies. Regardless of how you ended up with multiple currencies, you haven't had a way to view costs in the portal. Now you can!

When cost analysis detects multiple currencies, you'll have an option to switch between them, viewing costs in each currency individually. Today, this only shows charges for the selected currency – cost analysis is not converting currencies. For example, if you have two charges, one for $1 and another for £1, you can see either USD only ($1) or GBP only (£1). You cannot see $1+£1 in USD or GBP today. In the future, Azure Cost Management will convert costs into a single currency to show everything in USD (e.g. $2.27 in this case) and eventually in a currency you select (e.g. ¥243.43).

An image showing the currency type menu.

 

Manage EA departments and policies from the Azure portal

If you manage an Enterprise Agreement (EA), you're all too familiar with the Enterprise portal, which lets you keep an eye on your usage, monetary commitment credits, and additional charges each month. Did you know you can also do this in the Azure portal? With richer reporting in cost analysis and finer-grained control with budgets, the Azure portal delivers even more capabilities to understand and control your costs.

Now, you can also create and manage your departments and policy settings from the Azure portal. Departments allow you to organize subscriptions and delegate access to manage account owners, while policy settings allow you to enable or disable reservations, Azure Marketplace purchases, and Azure Cost Management for your organization. To ensure everyone in the organization can see and manage costs, make sure you enable account owners to view charges.

An image showing how to manage your departments and policy settings in the Azure portal.

Enabling account owners to view charges also ensures subscription users with RBAC access have visibility into their costs throughout the lifetime of their resources, can control spending with budgets, and can optimize their spending with cost-saving recommendations. Enabling cost visibility is critical to driving accountability throughout your organization. Once enabled, you can manage finer-grained access with the Cost Management Reader and Cost Management Contributor roles on any resource group, subscription, or management group. We recommend Cost Management Contributor to ensure everyone can create and share Azure Cost Management views and budgets across the resources and costs they have visibility to.

If you're still using the enterprise portal on a regular basis, we encourage you to give the Azure portal a shot. Simply go to the portal and click Cost Management + Billing in the list of favorites on the left.

And don't forget to plan your move from the key-based EA APIs (such as consumption.azure.com) to the latest UsageDetails API (version 2019-04-01-preview or newer). The key-based APIs will not be supported after your next EA renewal into Microsoft Customer Agreement (MCA), and switching to the UsageDetails API now will streamline this transition and minimize future migration work.
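To get a feel for the newer API, here is a rough sketch of calling the Consumption usage details endpoint at subscription scope with curl. The subscription ID and bearer token are placeholders, token acquisition is omitted, and the api-version shown is simply the preview version mentioned above, so confirm the currently supported version in the REST API reference before relying on it.

# placeholder subscription ID and token; returns usage detail records as JSON
curl -H "Authorization: Bearer <access-token>" \
  "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.Consumption/usageDetails?api-version=2019-04-01-preview"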

 

Expanded availability of resource tags in cost reporting

Tagging is the best way to organize and categorize your resources outside of the built-in management group, subscription, and resource group hierarchy. Add your own metadata and build custom reports using cost analysis. While most Azure resources support tags, some resource types do not. Here are the latest resource types which now support tags:

  • VPN gateways

Remember tags are a part of every usage record and are only available in Azure Cost Management reporting after the tag is applied. Historical costs are not tagged. Update your resources today for the best cost reporting.

 

Tag your resources with up to 50 tags

To effectively manage costs in a large organization, you need to map costs to reporting entities. Whether you're breaking down cost by organization, application, environment, or some other construct, resource tags are a great way to add that metadata and reuse it for cost, health, security, and compliance tracking and enforcement. But as your reporting needs change over time, you may have hit the 15 tag limit on resources. No more! You can now apply up to 50 tags to each resource!

To learn more about tag management and the benefits of tags, see "Use tags to organize your Azure resources".
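If you manage tags through scripts rather than the portal, a hedged Azure CLI sketch for tagging one of the newly supported VPN gateway resources might look like the following. The IDs and tag values are placeholders, and older CLI versions replace the existing tag set rather than merging into it, so check the behavior of your version first.

# placeholder IDs and tag values; applies tags to a VPN gateway resource
az resource tag \
  --ids /subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworkGateways/<gateway-name> \
  --tags CostCenter=1234 Environment=Production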

 

Documentation updates

Lots of documentation updates! Here are a few you might be interested in:

Want to keep an eye on all documentation updates? Check out the Azure Cost Management doc change history in the azure-docs repository on GitHub. If you see something missing, select "Edit" at the top of the doc and submit a quick pull request.

 

What's next?

These are just a few of the big updates from the last month. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming!

Follow @AzureCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks! And, as always, share your ideas and vote up others in the Azure Cost Management feedback forum.

Status on Visual Studio feature suggestions


Visual Studio receives over 500 feature suggestions from customers every month on the Developer Community website. Handling that volume is a huge effort, and we’d like to share how we process these suggestions and the steps we take to respond to them all. What happens to suggestion tickets after they’re opened, how many make it into Visual Studio, and what happens to the rest? Let’s find out.

Let’s start with the breakdown of incoming suggestion tickets in the past 6 months and what state they are in today. We find that around 15% of the suggestions are challenging to act on, and they typically fall into the following buckets.

11% – Closed as duplicate
3% – Closed due to missing info from customer
1% – Closed because they were not suggestions for Visual Studio

We do our best to follow up with customers to get more information where we can and move them into the next stage. For example, when making a suggestion to add a command to a context menu, it is important for us to know which context menu you meant.

That leaves us with the remaining 85%, which are currently moving their way through the system. Here is the status of those tickets currently in our system:

40% – Closed for a number of reasons (more info below)
20% – New, not yet processed or triaged
28% – Under review and gathering votes and comments
3% – Awaiting more info from customer
3% – On roadmap (under development)
6% – Completed and released

Now let’s dig in and see what’s behind those numbers.

From New to Under Review

We have a filtering system that automatically routes incoming suggestions to the appropriate team within the Visual Studio organization. Within my team, we have established a weekly process to triage these routed suggestions and review status. The process we follow looks like this:

  1. Does this suggestion belong to my team?
    • If not, move it to the right team
  2. Is the suggestion a duplicate of an existing suggestion?
    • If so, close it and transfer all votes to the original ticket (happens automatically)
  3. Does the suggestion contain all needed information?
    • If not, ask customer for more information
  4. Was this suggestion already completed or in active development?
    • If so, close it as either Completed or On Roadmap
  5. If it made it this far, mark it Under Review to gather votes and comments for 90 days

By following these steps, most suggestions end up Under Review as we gather more data and refine any repro steps or requirements. These make up over a quarter of all suggestions.

Every time someone adds a new comment to an existing ticket, we receive an email, so we know what’s going on with each ticket along the way, and can respond if needed.

Moving on from Under Review

Within 90 days, we attempt to address items that are still marked Under Review. Our options are:

  1. Mark it as Completed because we implemented the suggestion
  2. Mark it as On Roadmap because it’s in active development or will be very soon
  3. Close it because it didn’t get any votes and/or we’re not able to prioritize it

When we implement a suggestion, we mark it Completed or On Roadmap. Currently, approximately 10% of the incoming suggestions go on to be implemented or added to the roadmap.

But what about the ones that don’t?

Reasons for closing suggestions

Most suggestions are good suggestions, and it’s always painful to close them, especially because many of them are ones we would personally like to see implemented. As developers, you know that time and resources are finite, which means we can’t implement every suggestion.

The reason we close suggestions is a mix of multiple factors, such as:

  1. It didn’t receive any votes after 90 days as Under Review
  2. It got a few votes, but implementing it would not fit within our available resources
  3. It involves areas in Visual Studio that see little usage by our customers
  4. It has negative side-effects such as degraded performance, accessibility etc.

Over a third of all suggestions end up closed due to one or more of the above reasons.

On the positive side, for some suggestions that we close, we do move the capability into an experimental extension for Visual Studio. This allows us to lower the cost of delivering the capability with quality, and to gauge further interest from the community.

Suggestion completed

6% of all actionable suggestion tickets end up marked as Completed. It may not sound like much, but it is about 1 suggestion per weekday. Let that sink in. Every single weekday, the Visual Studio engineering team implements a community submitted suggestion.

Before we implement a suggestion, we first write a spec for it if needed. Then we schedule the work item in a sprint for engineering to pick up. The implementation sometimes requires coordination across multiple teams, so they can each deliver their piece of the feature.

After automated test and compliance runs have finished, it’s time for code review before the code starts its journey toward the Visual Studio master branch. More automated testing runs and finally manual testing follow. After fixing all identified bugs, the completed suggestion makes its way to a Visual Studio Preview release for user testing and stabilization.

So, how do we decide to implement suggestions and how can you optimize the chances of your suggestion making it? We look at several things:

  1. Suggestions with many votes and continuous votes over time
  2. Suggestions in areas that see lots of usage by our customers
  3. Suggestions that are easier to implement
  4. Suggestions that would improve Visual Studio’s competitive advantage
  5. Well-written suggestions with all relevant information in the description

A different way to think about it is to turn it around: imagine someone asked you to implement a feature in your product, and consider how you would weigh it against everything else on your plate. It’s in the best interest of our product and customers to complete as many suggestions as possible, and we strive to do so.

The best times are when we get to make a lot of people happy with a feature implementation based on a suggestion.

We can and must do better

We’ve gotten feedback that this process feels like a black box. Customers feel like they don’t get a response and they don’t know the status of their suggestions.

After submitting a suggestion, there is no transparency into the process, and it ends up closed without any good reason 6 months later. I end up feeling frustrated and angry. I don’t want to submit another suggestion just to be ignored. – Anonymous Visual Studio user

This is not acceptable. We must do better.

Here are some ideas we are working on within the team. We also welcome your feedback on what more we could do to help you understand the process better.

First up, we want to be much more transparent about the process. That’s exactly what this blog post aims to achieve.

Secondly, we must be faster at responding to new suggestions. That means triaging them within the first week, so we can bring down the 20% of new untriaged suggestions to a minimum. It also means not leaving any suggestions to linger for months. This will add visibility into what is going on with the suggestions much earlier and throughout its various phases. We’ve made great progress with this in the past 6 months, but still have a bunch of open tickets to go.

Thirdly, we need to be better at giving reasons for closing tickets: reasons individually written by the program manager who closed them, not an automated response. As we get better at handling the vast amount of incoming suggestions, this is where we’ll focus next.

Feedback

I hope this blog post helps shed light on the way we handle suggestions and how we plan to improve. Completing a suggestion every single weekday will hopefully encourage you to continue opening suggestion tickets.

In closing, we’d really like to hear your thoughts or questions. What could we do better and what do we do well? Let us know in the comments below.

The post Status on Visual Studio feature suggestions appeared first on The Visual Studio Blog.

Azure DevOps Roadmap update for 2019 Q3


As always, the Azure DevOps engineering team is working hard to deliver enhancements and new features across all our services. Recently we have been adding new capabilities at an unprecedented pace, including support for multi-stage YAML pipelines, Pipeline environments and Kubernetes integration, support for authenticating with GitHub identities, Python and Universal packages and public feeds in Azure Artifacts, new and updated integrations with Jira Software, Slack and Microsoft Teams, and much more.

We have also been making a renewed effort to include some smaller items each sprint which we categorize as “paper cuts”. These are minor to medium sized issues that can really help improve the user experience and are based on the amazing feedback provided through our Developer Community site.

Last week we updated the Features Timeline; take a look at the complete list of features for Q3. As we move into the second half of the calendar year, you can expect to see significant investments across all the services, but I wanted to call out some of my favorites from that list. Note that each feature links to our public roadmap project where you can find more details about each item and see its status.

Azure Pipelines:

Approvals in YAML pipelines

  • Approvals in YAML pipelines

    Instead of automatically moving a run from one stage to the next, you might want an approver to review it first. While approvals are a concept that already exists in Release pipelines, it is not yet available when defining pipelines with YAML documents. Config-as-code poses interesting challenges for where you specify approvals. We plan to make approvals a policy on the resource (agent pool, variable group, service connection, or secure file), and any stage that uses that resource will be paused for an approval.

  • Multi repository support for YAML pipelines

    Currently, you can trigger a pipeline from changes made in a single repository. With this feature, you will be able to trigger pipelines based on changes made in one of multiple repositories. For example, this is useful if you manage your code in one repository and the YAML file in a different repository.

  • Deployment job enhancements

    A deployment job is a special type of job that is used to deploy your app to an environment. We will enhance the strategies supported in deployment jobs to enable rolling, canary and blue-green deployments.

Azure Boards:

Boards Picklist

  • Customize system picklist values

    Customizing system picklists is a request with a high number of votes in the Developer Community. This quarter we will add this feature and let you customize values on system fields such as Activity, Priority, Risk, etc.

  • Update work item notifications to be more flexible

    We’ll update the work item notifications so that they are more customizable and flexible. You will have the full subscription option, unsubscribed option and a customizable option. This gives you the choice to follow just certain events on the work item, such as state changes or assigned to field changes.

Azure Repos:

  • Add granularity to automatic reviewer policy

    We are adding granularity to the automatic reviewer policy so that you can set required reviewers at the group level. Today you can set a total required reviewer policy, but this is a global total. For example, this will let you set two people from a specific group as approvers.

  • Update work items on commits

    You will be able to update and close work items through Git commits by using a syntax similar to “fixes #3245” (see the sketch after this list).

  • Support push policies to block commits meeting certain criteria

    We are adding push policies that will allow admins to block pushes to a repository based on certain criteria. You will be able to set policies to block pushes where the commit author does not match the defined pattern. In addition, you will be able to block pushes where the push contains a file name that violates the defined pattern.
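For the commit-based work item updates called out above, the day-to-day flow might look something like this sketch. The work item number, branch, and commit message are placeholders, and the exact keyword syntax will be documented when the feature ships.

# hypothetical example of the planned "fixes" syntax described above
git commit -m "Handle empty cart checkout. Fixes #3245"
git push origin main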

Azure Artifacts:

Updated Connect To experience

  • Simplified set-up and pipelines experiences

    We’re making a major update to the “Connect to feed” dialog that supports more modern tools and reduces the need to manually generate and store Personal Access Tokens on disk in order to use Azure Artifacts feeds. We will also release new package authentication tasks for Azure Pipelines that will allow you to securely configure both Azure Artifacts feeds and any other feeds you provide via service connections.

  • Billing management and cost controls

    Now that we’ve introduced consumption-based pricing for Azure Artifacts, we’ll be adding a set of views to help you understand your usage across feeds and upstream sources. From those views, we’re also adding a set of manual and automatic clean-up tools to help you control your costs.

Azure Test Plans:

  • Test Progress Report

    We are adding a test progress report to Test Plans. The report will be powered by analytics to reflect the summary, progress and drill down for the selected Test Plan.

Administration:

  • Policy to control and restrict new Azure DevOps organizations

    An organization in Azure DevOps is a mechanism for organizing and connecting groups of related projects. To help with governance, we are creating a new policy to control who in your enterprise can create new Azure DevOps organizations attached to your Azure Active Directory.

  • Export list of all Azure DevOps users paid under an Azure subscription

    We will let you pull a full list of paid users under one Azure subscription. This export will provide a list of all the users under the same Azure subscription, the organizations and projects they can access, when they last accessed Azure DevOps, and when they were first added. You can use the information provided by the export to see which project your users have access to. This can be useful if you need to split costs based on projects.

We appreciate your feedback, which helps us prioritize. If you have new ideas or changes you’d like to see, provide a suggestion on the Developer Community, vote for an existing one, or contact us on Twitter.

The post Azure DevOps Roadmap update for 2019 Q3 appeared first on Azure DevOps Blog.

New to Microsoft 365 in July—updates to Azure AD, Microsoft Teams, Outlook, and more

HttpRepl: A command-line tool for interacting with RESTful HTTP services


The ASP.NET team has built a command-line tool called HttpRepl. It lets you browse and invoke HTTP services in a similar way to working with files and folders. You give it a starting point (a base URL) and then you can execute commands like “dir” and “cd” to navigate your way around the API:

C:> dotnet httprepl http://localhost:65369/
(Disconnected)~ set base http://localhost:65369
Using swagger metadata from http://localhost:65369/swagger/v1/swagger.json

http://localhost:65369/~ dir
.        []
Fruits   [get|post]
People   [get|post]

http://localhost:65369/~ cd People
/People    [get|post]

http://localhost:65369/People~ dir
.      [get|post]
..     []
{id}   [get]

Once you have identified the API you are interested in, you can use all the typical HTTP verbs against it. Here is an example of calling GET on http://localhost:65369/People as a continuation from before:

http://localhost:65369/People~ get
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Wed, 24 Jul 2019 20:33:07 GMT
Server: Microsoft-IIS/10.0
Transfer-Encoding: chunked
X-Powered-By: ASP.NET

[
  {
    "id": 1,
    "name": "Scott Hunter"
  },
  {
    "id": 0,
    "name": "Scott Hanselman"
  },
  {
    "id": 2,
    "name": "Scott Guthrie"
  }
]

Right now HttpRepl is being shipped as a .NET Core Global Tool, which means all you have to do to get it is run the following command on a machine with the .NET Core SDK installed:

C:> dotnet tool install -g Microsoft.dotnet-httprepl --version "3.0.0-*"

The ASP.NET team built HttpRepl for the purpose of exploring and testing APIs. The idea was to make the experience of exploring and testing APIs through a command-line more convenient. What do you think about HttpRepl and what other uses do you envision for it? We would love to hear your opinion, please leave us a comment below or visit the project on GitHub. And for those wondering, HttpRepl’s official ship date is expected to align with .NET Core 3.0 GA.

Configure Visual Studio Code to launch HttpRepl on debug

You can configure Visual Studio Code to launch HttpRepl when debugging (along with your web app) by creating a new launch configuration as follows:

"version": "0.2.0",
  "compounds": [
    {
      "name": ".NET Core REPL",
      "configurations": [
        ".NET Core Launch (web)",
        "httprepl"
      ]
    }
  ],
  "configurations": [
    {
      "name": "httprepl",
      "type": "coreclr",
      "request": "launch",
      "program": "dotnet",
      "args": ["httprepl", "http://localhost:5000"],
      "cwd": "${workspaceFolder}",
      "stopAtEntry": false,
      "console": "integratedTerminal"
    },
    {
      "name": ".NET Core Launch (web)",
      "type": "coreclr",
      "request": "launch",
      "preLaunchTask": "build",
      // If you have changed target frameworks, make sure to update the program path.
      "program": "${workspaceFolder}/bin/Debug/netcoreapp3.0/api.dll",
      "args": [],
      "cwd": "${workspaceFolder}",
      "stopAtEntry": false,
      // Enable launching a web browser when ASP.NET Core starts. For more information: https://aka.ms/VSCode-CS-LaunchJson-WebBrowser
      "serverReadyAction": {
        "action": "openExternally",
        "pattern": "^\s*Now listening on:\s+(https?://\S+)"

      },
      "env": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      },
      "sourceFileMap": {
        "/Views": "${workspaceFolder}/Views"
      }
    }

Configure Visual Studio for Windows to launch HttpRepl on F5

You can configure Visual Studio to automatically launch HttpRepl when you F5 a project with the following simple steps:

The .exe for HttpRepl on Windows can be found in the following location:

%USERPROFILE%\.dotnet\tools\dotnet-httprepl.exe

Don’t forget to select it from the menu after adding it:

Next time you F5 your project, Visual Studio will automatically launch HttpRepl with the appropriate base URL (same URL that would have been passed to a browser, controlled through launchsettings):

Note: We are currently working on integrating HttpRepl into Visual Studio, which will give you an out-of-the box and more refined experience.

Configure Visual Studio for Mac to launch HttpRepl as a Custom Tool

In Visual Studio for Mac, you can configure a Custom Tool to open a new Terminal window and start httprepl. To configure this, go to Tools>Edit Custom Tools…

This will bring you to the External Tools dialog where you can add a new tool. To get started click the Add button to add a new tool. Here you will configure a new tool to launch a new Terminal instance and start the httprepl tool. Fill in the dialog with the following values.

  • Title: dotnet httprepl
  • Command: osascript
  • Arguments: -e 'tell application "Terminal" to activate' -e 'tell application "Terminal"
    to do script "dotnet-httprepl"'
  • Working directory: ${ProjectDir}

See the image below showing this new tool:

After clicking OK a new tool will appear in the Tools menu. To test your application with httprepl, start your application with Run>Start Debugging (or Run>Start without Debugging) and then start the httprepl with the new tool in the Tools menu:

When you invoke the tool a new Terminal window should appear in the foreground. From here you can set the base url to that of the api that you would like to test with set base. For example, see the image below that shows set base was executed and a get request will be executed next:

Give us feedback

It’s not HTTPie, it’s not curl, and it’s not Postman. It’s something that you run and that stays running, and it’s aware of its current context. We find this experience valuable, but ultimately what matters the most is what you think. Please let us know your opinion by leaving comments below or on GitHub.

The post HttpRepl: A command-line tool for interacting with RESTful HTTP services appeared first on ASP.NET Blog.

Docker Desktop for WSL 2 integrates Windows 10 and Linux even closer


Being able to seamlessly run Linux on Windows is making a bunch of common development tasks easier. When you're running WSL2 (Windows Subsystem for Linux 2) in a version of Windows 10 greater than build 18945, a BUNCH of useful and interesting scenarios light up and stuff just works.

Docker for Windows (download the Docker Desktop for WSL 2 Tech preview here) is great, but it has historically worked on Windows by creating a Hyper-V virtual machine called Moby that is visible within the Hyper-V client. It's a utility VM, but it's one you're aware of.

Docker for Windows using WSL2

However, if WSL2 runs a real Linux kernel in Windows 10 and it's managing a virtual machine platform underneath (and not visible to) Hyper-V client tools, then why not just let WSL2 handle containers for us?

That's exactly what the Docker Desktop WSL 2 Tech Preview aims to do. And just like WSL 2, it's fast.

...the time required to start a Docker daemon after a cold start is significantly faster. It takes less than 2 seconds to start the Docker daemon when compared to tens of seconds in the current version of Docker Desktop.

Once you've got a Linux distro (Ubuntu or the like) set up in WSL 2, you can right-click on Docker Desktop and click "WSL 2 Tech Preview." This is a goofy and not-super-intuitive UI for now but it's a moment in time.

Click WSL 2 Tech Preview

Then you just hit Start.

NOTE: If you've already installed Docker within WSL 2 at the command line, stop it and let Docker Desktop manage its lifecycle.

Here's the beginnings of their UI.

Docker for WSL2

When I drop out to PowerShell/CMD on Windows I can run "docker context ls."

C:\Users\Scott\Desktop> docker context ls

NAME      DESCRIPTION                                DOCKER ENDPOINT
default   Current DOCKER_HOST based configuration    npipe:////./pipe/docker_engine
wsl *     Docker daemon hosted in WSL 2              npipe:////./pipe/docker_wsl

You can see there's two contexts, and I've run "docker context use wsl" and that's now my default.
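If you want to flip between the two daemons from the same prompt, it's just a context switch. A quick sketch (hello-world is only a placeholder image):

docker context use wsl          # route CLI commands to the WSL 2 daemon
docker run --rm hello-world     # this container runs inside the WSL 2 backend
docker context use default      # back to the classic Hyper-V/Moby daemon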

Here is docker images from Ubuntu, and again from Windows (in PowerShell Core). They are the same!

Docker images in Ubuntu
Docker images from Powershell

Sweet. Here I am using PowerShell Core (which is open source and cross-platform, natch) to manage my builds, which are themselves cross-platform, and I can run either a docker build or a metal build on both Windows and Linux, all seamlessly on the same box.

building docker images

Also note, Simon from Docker points out "We are using a non default dataroot in this mode to avoid corrupting a datastore you use without docker desktop in case something goes wrong. Stopping the docker desktop wsl daemon and restarting the one you installed manually should bring everything back." I noticed this because my "Windows Docker" and my original WSL2 docker had a list of images that I naively expected to be available here, but this is a new context and a new dataroot, so you may need to fetch images again in this new world if you have historically been an active docker user.
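One quick way to see that the two contexts point at different data roots is to ask each daemon directly. A small sketch, assuming the DockerRootDir field name hasn't changed in your Docker version:

docker context use wsl
docker info --format '{{ .DockerRootDir }}'      # data root used by the WSL 2 daemon

docker context use default
docker info --format '{{ .DockerRootDir }}'      # data root used by the classic daemon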

So far I'm super impressed. Linux on the Windows Desktop feels right. It's Peanut Butter and Chocolate.



Moving your VMware resources to Azure is easier than ever


Back in April we announced the Azure VMware Solution to deliver a comprehensive VMware environment allowing you to run native VMware-based workloads on Azure. It’s a fully managed platform as a service (PaaS) that includes vSphere, vCenter, vSAN, NSX-T, and corresponding tools.

The VMware environment runs natively on Azure’s bare metal infrastructure, so there’s no nested virtualization and you can continue using your existing VMware tools. There’s no need to worry about operating, scaling, or patching the VMware physical infrastructure or re-platforming your virtual machines. The other benefit of this solution is that you can stretch your on-premises subnets into Azure. It’s like connecting another location to your VMware environment, only that location happens to be in Azure.

We’ve recently published a new episode of Microsoft Mechanics featuring Markus Hain, Senior Program Manager from the Azure engineering team. In this episode, Markus walks through the experience of coming from an on-premises VMware vSphere environment, provisioning an Azure VMware Solution private cloud, getting both environments to communicate, and what you can do once the service is up and running.

Snapshot of YouTube video with play button for Run VMware in Azure tutorial

Beyond building out and configuring the environment, Markus explains how the hybrid networking works to connect VMware sites and how the service translates bidirectional traffic between virtual networks used in Azure with virtual LANs (VLANs) used in VMware.

Hybrid networking connections between VMware VLANs and Microsoft Azure virtual networks

Once the services are running, it’s easy to vMotion as you normally would between VMware sites. We show a simple vMotion migration to move virtual machine workloads into Azure. As your VMware workloads start to run in Azure, you can take advantage of integrating Azure services seamlessly with existing VMware workloads. For example, your developers can create new VMware virtual machines inside the Azure portal leveraging the same VMware templates from the on-premises environment, and ultimately running those virtual machines in your VMware private cloud in Azure.

Configuring a virtual machine via Azure Resource Manager templates to run in Azure VMware Solution

Virtual machines created in the Azure portal will be visible, accessible, and run in the VMware vSphere environment. You have the flexibility to manage those resources as you normally would in vSphere, Azure, or both. The environments are deeply integrated at the API level to ensure that what you see in either experience is synchronized. This enables hybrid management, as well as allowing your developers to manage both Azure and VMware resources using a single Azure Resource Manager template.

Virtual machine created in the Azure portal visible in VMware vSphere environment

What’s more, you can monitor those virtual machines like you would Azure infrastructure as a service (IaaS) virtual machines and connect them to the broad set of resources across data, compute, networking, storage, and more. In fact, Markus shows how you can configure an application gateway running in Azure to load balance inbound traffic to your virtual machines running in the Azure VMware Solution. Since this is a truly hybrid and deeply integrated set of services, there’s really no limit to how you architect your apps and solutions, and like a native cloud service, you can benefit from the elasticity of the number of VMware nodes you’ll need to match seasonal or otherwise variable demand.

Right now, the Azure VMware Solution by CloudSimple is available in East US and West US regions. Western Europe is coming next, and we’ll add more regions over the coming months. To get started, just search for “vmware” while signed into the Azure portal and provision the service, nodes, and virtual machines. You’ll then be on your way to running your own private cloud in Azure!

Searching for “vmware” in the authenticated Azure portal

For more information, check out our Azure VMware Solution site.

New Azure Blueprint simplifies compliance with NIST SP 800-53


To help our customers manage their compliance obligations when hosting their environments in Microsoft Azure, we are publishing a series of blueprint samples built into Azure. Our most recent release is the NIST SP 800-53 R4 blueprint that maps a core set of Azure Policy definitions to specific NIST SP 800-53 R4 controls. For US governmental entities and others with compliance requirements based on NIST SP 800-53, this blueprint helps customers proactively manage and monitor compliance of their Azure environments.

The free Azure Blueprints service helps enable cloud architects and information technology groups to define a repeatable set of Azure resources that implements and adheres to an organization’s standards, patterns, and requirements. Blueprints may help speed the creation of governed subscriptions, supporting the design of environments that comply with organizational standards and best practices and scale to support production implementations for large-scale migrations.

Azure leads the industry with more than 90 compliance offerings that meet a broad set of international and industry-specific compliance standards. This puts Microsoft in a unique position to help ease our customers’ burden to meet their compliance obligations. In fact, many of our customers, particularly those in regulated industries, have expressed strong interest in being able to leverage our internal compliance practices for their environments with a service that maps compliance settings automatically. The Azure Blueprints service is our natural response to that interest.  Customers are ultimately responsible for meeting the compliance requirements applicable to their environments and must determine for themselves whether particular information helps meet their compliance needs.

The US National Institute of Standards and Technology (NIST) publishes a catalog of security and privacy controls, Special Publication (SP) 800-53, for all federal information systems in the United States (except those related to national security). It provides a process for selecting controls to protect organizations against cyberattacks, natural disasters, structural failures, and other threats.

The NIST SP 800-53 R4 blueprint provides governance guardrails using Azure Policy to help customers assess specific NIST SP 800-53 R4 controls. It also enables customers to deploy a core set of policies for any Azure-deployed architecture that must implement these controls.

NIST SP 800-53 R4 control mappings provide details on policies included within this blueprint and how these policies address various NIST SP 800-53 R4 controls. When assigned to an architecture, resources are evaluated by Azure Policy for non-compliance with assigned policies. These control mappings include:

  • Account management. Helps with the review of accounts that may not comply with an organization’s account management requirements.
  • Separation of duties. Helps in maintaining an appropriate number of Azure subscription owners.
  • Least privilege. Audits accounts that should be prioritized for review.
  • Remote access. Helps with monitoring and control of remote access.
  • Audit review, analysis, and reporting. Helps ensure that events are logged and enforces deployment of the Log Analytics agent on Azure virtual machines.
  • Least functionality. Helps monitor virtual machines where an application white list is recommended but has not yet been configured.
  • Identification and authentication. Helps restrict and control privileged access.
  • Vulnerability scanning. Helps with the management of information system vulnerabilities.
  • Denial of service protection. Audits if the Azure DDoS Protection standard tier is enabled.
  • Boundary protection. Helps with the management and control of the system boundary.
  • Transmission confidentiality and integrity. Helps protect the confidentiality and integrity of transmitted information.
  • Flaw remediation. Helps with the management of information system flaws.
  • Malicious code protection. Helps with the management of endpoint protection, including malicious code protection.
  • Information system monitoring. Helps with monitoring a system by auditing and enforcing logging across Azure resources.
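Once the blueprint’s policy assignments are in place, you can check the resulting compliance state from the command line as well as in the portal. A rough Azure CLI sketch follows; the subscription ID is a placeholder, and depending on your CLI version these commands may require the policy insights extension.

# summarize compliance across all policy assignments in the subscription
az policy state summarize --subscription <subscription-id>

# list resources that are currently non-compliant for a closer look
az policy state list --subscription <subscription-id> --filter "complianceState eq 'NonCompliant'"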

At Microsoft, we will continue this commitment to helping our customers leverage Azure in a secure and compliant manner. Over the next few months we plan to release more new built-in blueprints for HITRUST, FedRAMP, NIST SP 800-171, the Center for Internet Security (CIS) Benchmark, and other standards.

If you would like to participate in any early previews please sign up. In addition, learn more about the Azure NIST SP 800-53 R4 blueprint.

Introducing Azure Dedicated Host


We are excited to announce the preview of Azure Dedicated Host, a new Azure service that enables you to run your organization’s Linux and Windows virtual machines on single-tenant physical servers. Azure Dedicated Hosts provide you with visibility and control to help address corporate compliance and regulatory requirements. We are extending Azure Hybrid Benefit to Azure Dedicated Hosts, so you can save money by using on-premises Windows Server and SQL Server licenses with Software Assurance or qualifying subscription licenses. Azure Dedicated Host is in preview in most Azure regions starting today.

Create a dedicated host

You can use the Azure portal to create an Azure Dedicated Host, host groups (a collection of hosts), and to assign Azure Virtual Machines to hosts during the virtual machine (VM) creation process. 
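If you prefer the command line, the same flow can be sketched with the Azure CLI. All names, the region, and the SKU below are placeholders, and parameter details may differ across CLI versions during the preview, so verify them with az vm host --help before use.

# create a host group, then a dedicated host, then place a VM on that host
az vm host group create --resource-group <rg> --name <host-group> \
    --location eastus --platform-fault-domain-count 1

az vm host create --resource-group <rg> --host-group <host-group> --name <host1> \
    --sku DSv3-Type1

az vm create --resource-group <rg> --name <vm1> --image UbuntuLTS --size Standard_D4s_v3 \
    --host-group <host-group> --host <host1>   # some versions expect the host's full resource ID here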

Visibility and control

Azure Dedicated Hosts can help address compliance requirements organizations may have in terms of physical security, data integrity, and monitoring. This is accomplished by giving you the ability to place Azure VMs on a specific and dedicated physical server. This offering also meets the needs of IT organizations seeking host-level isolation.

Azure Dedicated Hosts provide visibility over the server infrastructure running your Azure Virtual Machines. They allow you to gain further control over:

  • The underlying hardware infrastructure (host type)
  • Processor brand, capabilities, and more 
  • Number of cores
  • Type and size of the Azure Virtual Machines you want to deploy

You can mix and match different Azure Virtual Machine sizes within the same virtual machine series on a given host.

With an Azure Dedicated Host, you can control all host-level platform maintenance initiated by Azure (e.g., host OS updates). An Azure Dedicated Host gives you the option to defer host maintenance operations and apply them within a defined 35-day maintenance window. During this self-maintenance window, you can apply maintenance to your hosts at your convenience, thus gaining full control over the sequence and velocity of the maintenance process.

Licensing cost savings

We now offer Azure Hybrid Benefit for Windows Server and SQL Server on Azure Dedicated Hosts, making it the most cost-effective dedicated cloud service for Microsoft workloads.

  • Azure Hybrid Benefit allows you to use existing Windows Server and SQL Server licenses with Software Assurance, or qualifying subscription licenses, to pay a reduced rate on Azure services. Learn more by referring to the Azure Hybrid Benefit FAQ.
  • We are also expanding Azure Hybrid Benefit so you can take advantage of unlimited virtualization for Windows Server and SQL Server with Azure Dedicated Hosts. Customers with Windows Server Datacenter licenses and Software Assurance can use unlimited virtualization rights in Azure Dedicated Hosts. In other words, you can deploy as many Windows Server virtual machines as you like on the host, subject only to the physical capacity of the underlying server. Similarly, customers with SQL Server Enterprise Edition licenses and Software Assurance can use unlimited virtualization rights for SQL Server on their Azure Dedicated Hosts.
  • Consistent with other Azure services, customers will get free Extended Security Updates for Windows Server 2008/R2 and SQL Server 2008/R2 on Azure Dedicated Host. Learn more about how to prepare for SQL Server and Windows Server 2008 end of support.

Azure Dedicated Hosts allow you to use other existing software licenses, such as SUSE or RedHat Linux. Check with your vendors for detailed license terms.

With the introduction of Azure Dedicated Hosts, we’re updating the outsourcing terms for Microsoft on-premises licenses to clarify the distinction between on-premises/traditional outsourcing and cloud services. For more details about these changes, read the blog “Updated Microsoft licensing terms for dedicated hosted cloud services.” If you have any additional questions, please reach out to your Microsoft account team or partner.

Getting started

The preview is available now. Get started with your first Azure Dedicated Host.

You can deploy Azure Dedicated Hosts with an ARM template or using CLI, PowerShell, and the Azure portal. For a more detailed overview, please refer to our website and the documentation for both Windows and Linux.

Frequently asked questions

Q: Which Azure Virtual Machines can I run on Azure Dedicated Host?

A: During the preview period you will be able to deploy Dsv3 and Esv3 Azure Virtual Machine series. Support for Fsv2 virtual machines is coming soon. Any virtual machine size from a given virtual machine series can be deployed on an Azure Dedicated Host instance, subject to the physical capacity of the host. For additional information please refer to the documentation.

Q: Which Azure Disk Storage solutions are available to Azure Virtual Machines running on an Azure Dedicated Host?

A: Azure Standard HDDs, Standard SSDs, and Premium SSDs are all supported during the preview program. Learn more about Azure Disk Storage.

Q: Where can I find pricing and more details about the new Azure Dedicated Host service?

A: You can find more details about the new Azure Dedicated Host service on our pricing page.

Q: Can I use Azure Hybrid Benefit for Windows Server/SQL Server licenses with my Azure Dedicated Host?

A: Yes, you can lower your costs by taking advantage of Azure Hybrid Benefit for your existing Windows Server and SQL Server licenses with Software Assurance or qualifying subscription licenses. With Windows Server Datacenter and SQL Server Enterprise Editions, you get unlimited virtualization when you license the entire host and use Azure Hybrid Benefit. As a result, you can deploy as many Windows Server virtual machines as you like on the host, subject to the physical capacity of the underlying server. All Windows Server and SQL Server workloads in Azure Dedicated Hosts are also eligible for free Extended Security Updates for Windows Server and SQL Server 2008/R2.

Q: Can I use my Windows Server/SQL Server licenses with dedicated cloud services?

A: In order to make software licenses consistent across multitenant and dedicated cloud services, we are updating licensing terms for Windows Server, SQL Server, and other Microsoft software products for dedicated cloud services. Beginning October 1, 2019, new licenses purchased without Software Assurance and mobility rights cannot be used in dedicated hosting environments in Azure and certain other cloud service providers. This is consistent with our policy for multitenant hosting environments. However, SQL Server licenses with Software Assurance can continue to use their licenses on dedicated hosts with any cloud service provider via License Mobility, even if licenses were purchased after October 1, 2019. Customers may use on-premises licenses purchased before October 1, 2019 on dedicated cloud services. For more details regarding licensing, please read the blog “Updated Microsoft licensing terms for dedicated hosted cloud services.”

For additional information, please refer to the Azure Dedicated Host website and the Azure Hybrid Benefit page.

Theming in Visual Studio just got a lot easier

Sometimes the default themes for Visual Studio just aren’t enough. Lucky for us, we’ve just redesigned the process of creating and importing custom themes.

Previously, the only way to import themes was to download the older Color Theme Editor extension. If you were brave enough to create your own theme, you had to edit elements one by one from an unorganized list of more than 3,000 vaguely named color tokens.

This summer, a group of interns developed the newly released Color Theme Designer extension, and we hope it makes creating custom themes a whole lot simpler for beginner and advanced designers alike.

A new theming experience

Finding and using a new theme is now as easy as downloading any other extension. Just check out the new Themes category in the Visual Studio Marketplace to download themes that other users have published.

For theme designers, the new Color Theme Designer comes with a more familiar startup workflow and a simplified design.

We’re introducing ‘Quick start,’ a feature that lets you create a custom theme in minutes by picking three base colors. For more specific customizations, the redesigned ‘Common elements’ and ‘All elements’ tabs allow you to edit all color tokens individually. The new ‘Preview’ mode lets you see edits in real time before fully saving and applying your theme. Your final product will be a Visual Studio extension that puts your theme alongside the default themes under Tools -> Options.

Let’s create a theme!

1. Set up your theme project

If you’re ready to get started making your first theme (or theme pack!), download the Color Theme Designer and create a new ‘VSTheme Project’ in Visual Studio.

The new project will contain an empty .vstheme file. Opening the file will prompt you to pick a base theme.

The base theme you select will fill the theme file with color tokens that you can later customize.

2. Start customizing

Only got 15 minutes?

In ‘Quick start,’ you select three colors, which generate a full palette of shades that sets the majority of colors in the theme. A miniature preview displays how the colors will generally appear in Visual Studio.

Want to dive in deeper?

‘Common elements’ has roughly 100 of the most commonly edited color tokens organized under five main categories. Next to each row of tokens, a snippet preview will update as you change the colors.

‘All elements’ shows every editable color token in a list that can be grouped by category or color value. Right-clicking tokens gives you the option to modify the hue, saturation, and lightness of the selection. If you can’t find a token you are looking for, try filtering by a hex value or keywords in the token name.

If you’d like to add additional theme files to your project, right-click to Add -> New Item -> VSTheme File.

Try clicking ‘Preview’ while customizing your theme to see your edits applied temporarily to the entire IDE!

3. Install your theme

When you’re finished customizing your theme, click ‘Apply’ if you’d like to start using it immediately. Your theme will appear under Tools -> Options -> General in the Color Themes dropdown alongside the default Visual Studio themes. To remove your theme, go to the Manage Extensions dialog and simply uninstall it like any other extension.

Otherwise, build your theme project and locate the .vsix file in the project’s output directory (the ‘bin’ folder) to install the theme extension. Use the .vsix file to share your theme with friends or publish it to the Visual Studio Marketplace!

In closing

What do you think of the new Color Theme Designer? Are there any features you would like to see included in the future? Please let us know your thoughts in the comments below.

We hope you feel inspired to download the new extension and begin making your own color themes, but if not, check out the Visual Studio Marketplace to download themes that other users have made!

 

This blog post was written by:

Prasiddhi Jain
University of North Carolina
Anna Owens
North Carolina State University
James Fuller
North Carolina A&T

The post Theming in Visual Studio just got a lot easier appeared first on The Visual Studio Blog.

Data on demand: Azure SQL Database in serverless mode


Azure SQL Database has a new “serverless” mode in preview that eliminates compute costs when not in use. In this post, I’ll show how you can set up a serverless database instance, and access data stored in it from R.

I’m working on a demo that I’ll be giving at several upcoming conferences, and for which I’ll be needing data in a database. Normally, I’d use a database installed on my local machine or in a virtual machine in the cloud, but this time I decided to go a different route: serverless.

I need the database to be around for a few months, but I’ll only be accessing it occasionally while I develop the demo and, later, present it live a few times. I need the database to be fairly large and fast (both of which rule out installing it on my laptop), and I’d prefer not to have to pay so much for the cloud resources while I’m not using it. (Yes, as a Microsoft employee I do have access to Azure services, but our department is cross-charged for our consumption and our spend is scrutinized like any other expense.) I considered using specialized cloud-data services, but for this talk I needed something that looked and felt like a traditional database.

I could always run a database in a virtual machine, but this time I decided to try Azure SQL Database. Azure SQL Database runs just like an ordinary SQL Server instance on a named server, but without the need to create or manage any associated VM. Even better, I discovered that it now has a “serverless” pricing tier, which gets me several benefits:

  • It will automatically scale its available compute resources up to the maximum number of vCores (virtual cores) I specify. The more cores in use, the more I am charged for compute. A minimum compute capacity (again, as specified) will always be provided while the database is running.
  • If the database is inactive for a period I specify, it will automatically be paused. While paused, I only pay for storage, but not for any compute. When I next need to access data, the database automatically restarts on demand.

The chart below illustrates this process for an Azure SQL Database with a minimum of 1 vCore and a maximum of 4 vCores; the green lines show when and at what rate compute is billed, depending on the actual compute demand (orange line) on the database.

Serverless-billing
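To make the billing model concrete, here’s a minimal R sketch of the behavior the chart describes. It reflects my reading of the chart rather than an official billing formula: while the database is running, billed compute tracks demand clamped between the configured minimum and maximum vCores, and while paused, compute billing drops to zero. The billed_vcores function and the demand values are purely illustrative.

billed_vcores <- function(demand, min_vcores = 1, max_vcores = 4, paused = FALSE) {
  # While paused, no compute is billed at all
  if (paused) return(0)
  # While running, billing follows demand, clamped to [min_vcores, max_vcores]
  pmin(pmax(demand, min_vcores), max_vcores)
}

demand <- c(0.2, 0.8, 2.5, 6.0, 3.0)   # hypothetical vCore demand samples
billed_vcores(demand)                  # 1.0 1.0 2.5 4.0 3.0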

Creating the database

Setting up the Azure SQL Database instance is pretty simple, and covered in detail here. From the Azure Portal, create a “SQL Database” instance (in the Databases section) and watch out for these steps during the creation process:

  • You will be asked to choose a name for your new server: choose carefully here, as that will form the public URL you will use to access the server later.
  • To set up serverless operation, click the “Configure database” option under “Compute + storage” and choose the Serverless compute tier.

Compute-storage

  • With the serverless compute tier selected, you will then configure the capacity and performance of the database. These choices will determine the running cost.
  • The maximum and minimum vCores determine the processing speed and memory allocated to the database. The default setting for minimum vCores is 0.5, and that’s the best choice if you want to minimize operating costs. You can change these settings after the database is created, too.

Sizing-options

  • The Auto-pause delay determines how long the database must be idle before it is suspended. I chose two hours (instead of the one-hour minimum) to give myself enough leeway between testing my demo and actually delivering it. That two-hour delay before each auto-pause will cost me $0.31, but it means I can access the database during that period with no start-up delay. (This setting is reconfigurable after you create the database, and you can also switch off auto-pause later if needed.)
  • Finally, you’ll need to choose the maximum capacity of your database. You pay for storage even when the database isn’t in use, so this will be your biggest factor in determining costs. You can adjust this setting up or down later, and the storage costs will adjust accordingly.

With the setup described above, this instance will cost me $3.59 per month for storage, plus at most $1.25 per active hour for compute. (In practice, though, I’ll be charged a tiny fraction of the compute cost, as this database will spend most of its time paused. And even when I am actively using it, it will spend most of that time consuming fewer than the maximum 4 vCores.)
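As a back-of-the-envelope check, here’s a small R sketch of that estimate. The $3.59 monthly storage cost and the $1.25-per-active-hour ceiling come from the figures above; the 10 active hours per month and 50% average utilization are illustrative assumptions, not measurements.

storage_per_month <- 3.59   # fixed monthly storage cost for this configuration
compute_per_hour  <- 1.25   # maximum compute cost at 4 vCores
active_hours      <- 10     # assumed active hours per month
avg_utilization   <- 0.5    # assumed fraction of the maximum compute actually billed

storage_per_month + compute_per_hour * active_hours * avg_utilization
# [1] 9.84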

One last thing while you’re setting up the database: in the Additional Settings tab, you have the option of pre-populating the database with data from a SQL Server backup, or with the AdventureWorks data: a sample data set from a fictitious retail store. That’s the data I’ll be using in the example to follow.

Adventureworks-sample

Enabling remote access to the database

After setting up the database as described above, the first thing you’ll need to do is to open up the firewall to allow remote access. Initially, the firewall is configured to deny all remote access, so you’ll need to add the IP address of any devices that need to access the database remotely. The details are described here, but a simple way to get started is to click “Set Server Firewall” in the Overview page for the database in the Azure Portal, and click “Add Client IP”. This adds the IP address of the computer you’re currently using.

Firewall-settings

Installing Drivers

You will also need to install a driver for your client machine to access the database. There are various driver options available, but I’ll be using ODBC. Windows 10 comes with a standard ODBC driver installed (it appears as “SQL Server”), but if you’re on Mac or Linux, or want the best results on Windows, I recommend installing the 64-bit (x64) version of Microsoft ODBC Driver 13.1 for SQL Server. If you use a different driver, you’ll need to substitute its name in the connection string you get in the next section.

Accessing data in Azure SQL Database from R

Now that the database is set up and the firewall open, you can access it like any other SQL Server database. I’ll be doing my demo using R, so let’s use that as an example client accessing the data remotely.

To illustrate how this works, let’s look at setting up an Azure SQL Database instance in serverless mode, configuring it with some sample data, and accessing the data from a client application: R.

To connect to the database with R, we’ll use the DBI package (an R Consortium project that provides a generalized interface to databases) and the odbc package from RStudio that provides the ODBC interface for SQL Server:

library(DBI)
library(odbc)

This is a good time to check you have a driver available to access the database, with:

> sort(unique(odbcListDrivers()[[1]]))
[1] "ODBC Driver 13 for SQL Server" "ODBC Driver 17 for SQL Server"
[3] "SQL Server" 

I recommend using "ODBC Driver 13 for SQL Server" (you can download it here), but if you only have a generic ODBC driver (like “SQL Server”, which comes installed as standard on Windows 10) it will most likely still work, though possibly with reduced efficiency.

With a suitable driver available, you can create a connection to the database. The easiest way to do this is to visit your database in the Azure Portal and click “show database connection strings” in the Overview pane, and choose the ODBC option. I’ve blacked out the server name, database name, and admin user ID below: you select those when you first create the database.

Connection-string

Copy that string and replace {your_password_here} with the password you selected during setup (without the braces). Now you can create the connection in R:

# Connection string copied from the portal; the server name, database name,
# and user ID below are placeholders for the values you chose during setup
constr <- "Driver={ODBC Driver 13 for SQL Server};Server=tcp:<your-server>.database.windows.net,1433;Database=<your-database>;Uid=<your-admin-user>;Pwd=<your-password>;Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"
con <- dbConnect(odbc(), .connection_string = constr)

(Note: if you get an error message of the form Login timeout expired or TCP Provider: Timeout error, it means the database has been paused for inactivity. In that case, wait about 20 seconds and run the dbConnect call again. Sadly, the Connection Timeout setting doesn’t seem to help with this.)
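If you’d rather not retry by hand, here’s a minimal retry sketch. The connect_with_retry wrapper is my own illustration (not part of DBI or odbc), and it assumes constr holds the ODBC connection string from the previous step; dbConnect, tryCatch, and Sys.sleep do the real work.

connect_with_retry <- function(constr, attempts = 5, wait_seconds = 20) {
  for (i in seq_len(attempts)) {
    con <- tryCatch(
      dbConnect(odbc::odbc(), .connection_string = constr),
      error = function(e) NULL  # typically a login timeout while the database resumes
    )
    if (!is.null(con)) return(con)
    message("Attempt ", i, " failed; waiting ", wait_seconds, " seconds...")
    Sys.sleep(wait_seconds)
  }
  stop("Could not connect after ", attempts, " attempts.")
}

con <- connect_with_retry(constr)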

Now, you have everything you need to access the data from R. If you’re using RStudio, you can use the Connections pane to explore the available tables, or even click on the View Table icon to take a look at the data in the first few rows:

Rstudio-screenshot
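If you’re not using RStudio, you can get a similar overview from code with DBI’s dbListTables:

# List the tables visible on this connection; the AdventureWorks sample
# tables (Product, SalesOrderDetail, and so on) live in the SalesLT schema
dbListTables(con)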

If you know the T-SQL query you want to run on the data, you can do that with dbGetQuery and return the results as an R data frame. For example, this query provides the product category for each product in the Product table:

dbGetQuery(con,'
SELECT pc.Name as CategoryName, p.name as ProductName
FROM [SalesLT].[ProductCategory] pc
JOIN [SalesLT].[Product] p
ON pc.productcategoryid = p.productcategoryid;
')

If you’re not comfortable with SQL, you can use the dbplyr package to write the queries for you. In the example below, even though it uses standard dplyr syntax, all of the computation takes place in the Azure SQL Database. The results are only returned to R on the collect() call, which means you can compute on large data in the database and return only the summarized results to R. For example, this code returns the top 5 products in the AdventureWorks sales data by total sales:

library(dplyr)
library(dbplyr)
# Lazily reference the AdventureWorks sales table (SalesLT.SalesOrderDetail)
sales <- tbl(con, in_schema("SalesLT", "SalesOrderDetail"))
sales %>% 
 group_by(ProductID) %>%
 summarise(Total_Sales = sum(LineTotal, na.rm=TRUE)) %>%
 arrange(desc(Total_Sales)) %>%
 head(5) %>%
 collect()
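If you’re curious what dbplyr actually sends to the server, replace collect() with show_query() to print the generated T-SQL without retrieving any rows:

# Inspect the T-SQL that dbplyr generates for this pipeline
sales %>%
 group_by(ProductID) %>%
 summarise(Total_Sales = sum(LineTotal, na.rm=TRUE)) %>%
 arrange(desc(Total_Sales)) %>%
 head(5) %>%
 show_query()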

Trying it Out

The serverless compute tier for Azure SQL Database is available in public preview now. To learn more about setting up the database, the Microsoft Learn module Provision an Azure SQL database to store application data is a good place to start. All you need is an Azure subscription, and if you don’t have one already you can use this link to sign up and also get $200 in free credits to use on anything you like, including Azure SQL Database.
