
How Skype modernized its backend infrastructure using Azure Cosmos DB – Part 2


This is a three-part blog post series about how organizations are using Azure Cosmos DB to meet real world needs, and the difference it’s making to them. In part 1, we explored the challenges Skype faced that led them to take action. In this post (part 2 of 3), we examine how Skype implemented Azure Cosmos DB to modernize its backend infrastructure. In part 3, we’ll cover the outcomes resulting from those efforts.

Note: Comments in italics/parentheses are the author's.

The solution

Putting data closer to users

Skype found the perfect fit in Azure Cosmos DB, the globally distributed NoSQL database service from Microsoft. It gave Skype everything needed for its new People Core Service (PCS), including turnkey global distribution and elastic scaling of throughput and storage, making it an ideal foundation for distributed apps like Skype that require extremely low latency at global scale.

Initial design decisions

Prototyping began in May 2017. Some early choices made by the team included the following:

  • Geo-replication: The team started by deploying Azure Cosmos DB in one Azure region, then used its pushbutton geo-replication to replicate it to a total of seven Azure regions: three in North America, two in Europe, and two in the Asia Pacific (APAC) region. However, it later turned out that a single presence in each of those three geographies was enough to meet all SLAs.
  • Consistency level: In setting up geo-replication, the team chose session consistency from among the five consistency levels supported by Azure Cosmos DB. (Session consistency is often ideal for scenarios where a device or user session is involved because it guarantees monotonic reads, monotonic writes, and read-your-own-writes.)
  • Partitioning: Skype chose UserID as the partition key, thereby ensuring that all data for each user would reside on the same physical partition. A physical partition size of 20GB was used instead of the default 10GB size because the larger number enabled more efficient allocation and usage of request units per second (RU/s)—a measure of pre-allocated, guaranteed database throughput. (With Azure Cosmos DB, each collection must have a partition key, which acts as a logical partition for the data and provides Azure Cosmos DB with a natural boundary for transparently distributing it internally, across physical partitions.) A minimal configuration sketch illustrating these choices follows this list.
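To make these choices concrete, here is a minimal sketch of how a similarly configured account, database, and container could be created with the azure-cosmos Python SDK. This is illustrative only: Skype's service is built on the .NET stack, the endpoint, key, names, and throughput figure below are placeholders, and the consistency_level option should be verified against the SDK version in use.

```python
from azure.cosmos import CosmosClient, PartitionKey

# Placeholder endpoint and key; real values would come from configuration or Key Vault.
ENDPOINT = "https://<account-name>.documents.azure.com:443/"
KEY = "<account-key>"

# Session consistency guarantees monotonic reads, monotonic writes,
# and read-your-own-writes within a session, as described above.
client = CosmosClient(ENDPOINT, credential=KEY, consistency_level="Session")

database = client.create_database_if_not_exists(id="people-core")

# Partitioning on the user ID keeps all of a user's data together, giving
# Azure Cosmos DB a natural boundary for distributing it across partitions.
container = database.create_container_if_not_exists(
    id="contacts",
    partition_key=PartitionKey(path="/UserID"),
    offer_throughput=10000,  # provisioned RU/s; sized to the workload
)
```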

Event-driven architecture based on Azure Cosmos DB change feed

In building the new PCS service, Skype developers implemented a micro-services, event-driven architecture based on change feed support in Azure Cosmos DB. Change feed works by “listening” to an Azure Cosmos DB container for any changes and outputting a sorted list of documents that were changed, in the order in which they were modified. The changes are persisted, can be processed asynchronously and incrementally, and the output can be distributed across one or more consumers for parallel processing. (Change Feed in Azure Cosmos DB is enabled by default for all accounts, and it does not incur any additional costs. You can use provisioned RU/s to read from the feed, just like any other operation in Azure Cosmos DB.)
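As a rough illustration of the idea, the loop below polls a container's change feed and hands each changed document to a handler, resuming from a continuation token between polls. The method name, options, and continuation handling are assumptions based on the azure-cosmos Python SDK and should be checked against your SDK version; Skype's own implementation uses the .NET change feed processor library described later in this post.

```python
import time
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account-name>.documents.azure.com:443/", credential="<account-key>")
container = client.get_database_client("people-core").get_container_client("contacts")

def handle_change(doc: dict) -> None:
    # Downstream consumers (notifications, graph search, chat, and so on) hang off here.
    print("changed document:", doc.get("id"))

continuation = None
while True:
    # Read the next batch of changed documents, in the order they were modified.
    changes = container.query_items_change_feed(
        is_start_from_beginning=(continuation is None),
        continuation=continuation,
    )
    for doc in changes:
        handle_change(doc)
    # The continuation token records where this consumer left off.
    continuation = container.client_connection.last_response_headers.get("etag")
    time.sleep(1)
```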

“Generally, an event-driven architecture uses Kafka, Event Hub, or some other event source,” explains Kaduk. “But with Azure Cosmos DB, change feed provided a built-in event source that simplified our overall architecture.”

To meet the solution’s audit history requirements, developers implemented an event sourcing with capture state pattern. Instead of storing just the current state of the data in a domain, this pattern uses an append-only store to record the full series of actions taken on the data (the “event sourcing” part of the pattern), along with the mutated state (i.e. the “capture state”). The append-only store acts as the system of record and can be used to materialize domain objects. It also provides consistency for transactional data, and maintains full audit trails and history that can enable compensating actions.
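Put another way, each command produces two pieces of data: an immutable event appended to the store, and the mutated ("captured") state that goes with it. A simplified sketch, with hypothetical field names, of handling a "block contact" command:

```python
import uuid
from datetime import datetime, timezone

def apply_block(current_state: dict, command: dict) -> tuple[dict, dict]:
    """Return (event, new_state) for a hypothetical 'block contact' command.

    The event is appended to the append-only store (the system of record);
    the returned state is the captured, mutated domain aggregate.
    """
    event = {
        "id": str(uuid.uuid4()),
        "type": "event",
        "eventType": "ContactBlocked",
        "UserID": command["UserID"],          # partition key
        "payload": {"blockedUserId": command["blockedUserId"]},
        "originator": command["originator"],
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    new_state = dict(current_state)
    blocked = set(new_state.get("blockedUsers", []))
    blocked.add(command["blockedUserId"])
    new_state["blockedUsers"] = sorted(blocked)
    return event, new_state
```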

Separate read and write paths and data models for optimal performance

Developers used the Command and Query Responsibility Segregation (CQRS) pattern together with the event sourcing pattern to implement separate write and read paths, interfaces, and data models, each tailored to their relevant tasks. “When CQRS is used with the Event Sourcing pattern, the store of events is the write model, and is the official source of information capturing what has happened or changed, what was the intention, and who was the originator,” explains Kaduk. “All of this is stored in one JSON document for each changed domain aggregate—user, person, and group. The read model provides materialized views that are optimized for querying and are stored in separate, smaller JSON documents. This is all enabled by the Azure Cosmos DB document format and the ability to store different types of documents with different data structures within a single collection.” You can find more information on using Event Sourcing together with CQRS.
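Since both models share one collection, a simple type discriminator is enough to tell the write-side event stream apart from the read-side materialized view. A hypothetical illustration of the two document shapes:

```python
# Write model: the event history plus captured state for one domain aggregate,
# kept in a single JSON document per changed aggregate (user, person, or group).
write_model_doc = {
    "id": "user-42:events",
    "type": "eventStream",
    "UserID": "user-42",          # partition key keeps both models co-located
    "events": [
        {"eventType": "ContactAdded", "payload": {"contactId": "user-7"}},
        {"eventType": "ContactBlocked", "payload": {"blockedUserId": "user-9"}},
    ],
    "capturedState": {"contacts": ["user-7"], "blockedUsers": ["user-9"]},
}

# Read model: a second, smaller JSON document holding a materialized view
# that is optimized for querying.
read_model_doc = {
    "id": "user-42:view",
    "type": "contactListView",
    "UserID": "user-42",
    "contacts": ["user-7"],
    "blockedUsers": ["user-9"],
}
```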

Custom change feed processing

Instead of using Azure Functions to handle change feed processing, the development team chose to implement its own change feed processing using the Azure Cosmos DB change feed processor library—the same code used internally by Azure Functions. This gave developers more granular control over change feed processing, including the ability to implement retrying over queues, dead-letter event support, and deeper monitoring. The custom change feed processors run on Azure Virtual Machines (VMs) under the “PaaS v1” model.

“Using the change feed processor library gave us superior control in ensuring all SLAs were met,” explains Kaduk. “For example, with Azure Functions, a function can either fail or spin-and-wait while it retries. We can’t afford to spin-and-wait, so we used the change feed processor library to implement a queue that retries periodically and, if still unsuccessful after a day or two, sends the request to a ‘dead letter collection’ for review. We also implemented extensive monitoring—such as how fast requests are processed, which nodes are processing them, and estimated work remaining for each partition.” (See Frantisek’s blog article for a deeper dive into how all this works.)
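A heavily simplified version of that retry-then-dead-letter flow might look like the sketch below. The attempt limit, delay, queue, and dead-letter container are hypothetical stand-ins; the real implementation is built on the .NET change feed processor library and retries over a much longer window.

```python
import queue
import time

MAX_ATTEMPTS = 5
RETRY_DELAY_SECONDS = 60

def process(doc: dict) -> None:
    """Business processing for one changed document; raises on failure."""
    ...

def handle_with_retry(doc: dict, dead_letter_container) -> None:
    """Retry a failed document periodically instead of spinning, then dead-letter it."""
    pending: "queue.Queue[tuple[dict, int]]" = queue.Queue()
    pending.put((doc, 0))
    while not pending.empty():
        item, attempts = pending.get()
        try:
            process(item)
        except Exception:
            if attempts + 1 >= MAX_ATTEMPTS:
                # Park the failed change in a 'dead letter collection' for review.
                dead_letter_container.upsert_item({**item, "type": "deadLetter"})
            else:
                time.sleep(RETRY_DELAY_SECONDS)
                pending.put((item, attempts + 1))
```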

Cross-partition transactions and integration with other services

Change feed also provided a foundation for implementing background post-processing, such as cross-partition transactions that span the data of more than one user. The case of John blocking Sally from sending him messages is a good example. The system accepts the command from user John to block user Sally, upon which the request is validated and dispatched to the appropriate handler, which stores the event history and updates the queryable data for user John. A postprocessor responsible for cross-partition transactions monitors the change feed, copying the information that John blocked Sally into the data for Sally (which likely resides in a different partition) as a reverse block. This information is used for determining the relationship between peers. (More information on this pattern can be found in the article, “Life beyond Distributed Transactions: an Apostate’s Opinion.”)
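Sketching that post-processor: when a "blocked" event for John appears on the change feed, it writes a corresponding reverse-block document into Sally's partition. The document shapes and field names here are hypothetical.

```python
def on_change_feed_event(doc: dict, container) -> None:
    """Hypothetical post-processor for the cross-partition effects of block events."""
    if doc.get("type") != "event" or doc.get("eventType") != "ContactBlocked":
        return

    blocker = doc["UserID"]                    # e.g. John
    blocked = doc["payload"]["blockedUserId"]  # e.g. Sally

    # Write the reverse block into the blocked user's partition so relationship
    # checks on Sally's side never need a cross-partition query.
    container.upsert_item({
        "id": f"reverse-block:{blocker}",
        "type": "reverseBlock",
        "UserID": blocked,      # partition key: Sally's partition
        "blockedBy": blocker,
    })
```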

Similarly, developers used change feed to support integration with other services, such as notification, graph search, and chat. The event is received in the background by all running change feed processors, one of which is responsible for publishing a notification to external event consumers, such as Azure Event Hubs, using a public schema.

Azure Cosmos DB flowchart

Migration of user data

To facilitate the migration of user data from SQL Server to Azure Cosmos DB, developers wrote a service that iterated over all the user data in the old PCS service to:

  • Query the data in SQL Server and transform it into the new data models for Azure Cosmos DB.
  • Insert the data into Azure Cosmos DB and mark the user’s address book as mastered in the new database.
  • Update a lookup table for the migration status of each user.

To make the entire process seamless to users, developers also implemented a proxy service that checked the migration status in the lookup table for a user and routed requests to the appropriate data store, old or new. After all users were migrated, the old PCS service, the lookup table, and the temporary proxy service were removed from production.
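The proxy's routing logic boils down to a lookup followed by a dispatch, roughly as follows; the lookup table and service clients are placeholders for illustration.

```python
def route_request(user_id: str, request, lookup_table, old_pcs, new_pcs):
    """Route a PCS request to the old or new data store based on migration status.

    `lookup_table`, `old_pcs`, and `new_pcs` are hypothetical clients; the real
    proxy was a temporary service that was removed once migration completed.
    """
    status = lookup_table.get_status(user_id)   # e.g. "migrated" or "pending"
    if status == "migrated":
        return new_pcs.handle(request)          # new Azure Cosmos DB-backed PCS
    return old_pcs.handle(request)              # old SQL Server-backed PCS
```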

Migration for production flowchart

Migration for production users began in October 2017 and took approximately two months. Today, all requests are processed by Azure Cosmos DB, which contains more than 140 terabytes of data in each of the replicated regions. The new PCS service processes up to 15,000 reads and 6,000 writes per second, consuming between 2.5 million and 3 million RUs per second across all replicas. A process monitors RU usage, automatically scaling allocated RUs up or down as needed.
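Conceptually, that scaling process compares recent RU consumption with the provisioned throughput and adjusts the offer accordingly. A simplified sketch using the Python SDK's throughput methods (which should be verified for your SDK version, and which differ from whatever Skype runs in production) is shown below.

```python
def autoscale(container, consumed_ru_per_second: float,
              target_utilization: float = 0.7,
              min_ru: int = 10_000, max_ru: int = 500_000) -> None:
    """Nudge provisioned RU/s toward the observed load."""
    current_ru = container.get_throughput().offer_throughput

    desired = int(consumed_ru_per_second / target_utilization)
    # Throughput is provisioned in increments of 100 RU/s; clamp to sane bounds.
    desired = max(min_ru, min(max_ru, (desired // 100) * 100))

    if desired != current_ru:
        container.replace_throughput(desired)
```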

Continue on to part 3, which covers the outcomes resulting from Skype’s implementation of Azure Cosmos DB.


How Skype modernized its backend infrastructure using Azure Cosmos DB – Part 3


This is a three-part blog post series about how organizations are using Azure Cosmos DB to meet real world needs, and the difference it’s making to them. In part 1, we explored the challenges Skype faced that led them to take action. In part 2, we examined how Skype implemented Azure Cosmos DB to modernize its backend infrastructure. In this post (part 3 of 3), we cover the outcomes resulting from those efforts.

Note: Comments in italics/parentheses are the author's.

The outcomes

Improved throughput, latency, scalability, and more

Using Azure Cosmos DB, Skype replaced three monolithic, geographically isolated data stores with a single, globally distributed user data service that delivers better throughput, lower latencies, and improved availability. The new PCS service can elastically scale on demand to handle future growth, and gives the Skype team ownership of its data without the burden of maintaining its own infrastructure—all at less than half what it cost to maintain the old PCS system. Development of the solution was fast and straightforward thanks to the extensive functionality provided by Azure Cosmos DB and the fact that it’s a fully hosted service.

Better throughput and lower latencies

Compared to the old solution, the new PCS service is delivering improved throughput and lower latency—in turn enabling the Skype team to easily meet all its SLAs. “Easy geographic distribution, as enabled by Azure Cosmos DB, was a key enabler in making all this possible,” says Kaduk. “For example, by enabling us to put data closer to where its users are, in Europe, we’ve been able to significantly reduce the time required for the permission service that’s used to set up a call—and meet our overall one-second SLA for that task.”

Higher availability

The new PCS service is supporting its workload without timeouts, deadlocks, or quality-of-service degradation—meaning that users are no longer inconvenienced with bad data or having to wait. And because the service runs on Azure Cosmos DB, the Skype team no longer needs to worry about the availability of the underlying infrastructure upon which its new PCS service runs. 

“Azure Cosmos DB provides a 99.999 percent read availability SLA for all multiregion accounts, with built-in protection against the unlikely event of a regional outage,” says Kaduk. “We can prioritize failover order for our multiregion accounts and can even manually trigger failover to test the end-to-end availability of our app—all with guaranteed zero data loss.”

Elastic scalability

With Azure Cosmos DB, the Skype team can independently and elastically scale storage and throughput at any time, across the globe. All physical partition management required to scale is fully managed by Azure Cosmos DB and is transparent to the Skype team. Azure Cosmos DB handles the distribution of data across physical and logical partitions and the routing of query requests to the right partition—all without compromising availability, consistency, latency, or throughput. All this enables the team to pay for only the storage and throughput it needs today, and to avoid having to invest any time, energy, or money in spare capacity before it’s needed.

“The ability of Azure Cosmos DB to scale is obvious,” says Kaduk. “We planned for 100 terabytes of data 18 months ago and are already at 140 terabytes, with no major issues handling that growth.”

Full ownership of data – with zero maintenance and administration

Because Azure Cosmos DB is a fully managed Microsoft Azure service, the Skype team doesn’t need to worry about day-to-day administration, deploying and configuring software, or dealing with upgrades. Every database is automatically backed up, protected against regional failures, and encrypted, so the team doesn’t need to worry about those things either—leaving it with more time to focus on delivering new customer value.

“One of the great things about our new PCS service is that we fully own the data store, whereas we didn’t before,” says Kaduk. “In the past, when Skype was first acquired by Microsoft, we had a team that maintained our databases. We didn’t want to continue maintaining them, so we handed them off to a central team. Today, that same user data is back under our full control and we’re still not burdened with day-to-day maintenance—it’s really the best of both worlds.”

Lower costs

Although Kaduk’s team wasn’t paying to maintain the old PCS databases, he knows what that used to cost—and says that the monthly bill for the new solution running on Azure Cosmos DB is much lower. “Our new PCS data store is about 40 percent less expensive than the old one was,” he states. “We pay that cost ourselves today, but, given all the benefits, it’s well worth it.”

Rapid, straightforward implementation

All in all, Kaduk feels the migration to Azure Cosmos DB was “pretty simple and straightforward.” Development began in May 2017, and by October 2017, all development was complete and the team began migrating all 4 billion Skype users to the new solution. The team consisted of eight developers, one program manager, and one manager.

“We had no prior experience with Azure Cosmos DB, but it was pretty easy to come up to speed,” he states. “Even with a few lessons learned, we did it all in six months, which is pretty impressive for a project of this scale. One reason for our rapid success was that we didn’t have to worry about deploying any physical infrastructure. Azure Cosmos DB also gave us a schema-free document database with both SQL syntax and change feed streaming capabilities built-in, all under strict SLAs. This greatly simplified our architecture and enabled us to meet all our requirements in a minimum amount of time.”

Lessons learned

Looking back at the project, Kaduk recalls several “lessons learned.” These include:

  • Use direct mode for better performance – How a client connects to Azure Cosmos DB has important performance implications, especially with respect to observed client-side latency. The team began by using the default Gateway Mode connection policy, but switched to a Direct Mode connection policy because it delivers better performance.
  • Learn how to write and handle stored procedures – With Azure Cosmos DB, transactions can only be implemented using stored procedures—pieces of application logic that are written in JavaScript that are registered and executed against a collection as a single transaction. (In Azure Cosmos DB, JavaScript is hosted in the same memory space as the database. Hence, requests made within stored procedures execute in the same scope of a database session, which enables Azure Cosmos DB to guarantee ACID for all operations that are part of a single stored procedure.)
  • Pay attention to query design – With Azure Cosmos DB, queries have a large impact in terms of RU consumption. Developers didn’t pay much attention to query design at first, but soon found that RU costs were higher than desired. This led to an increased focus on optimizing query design, such as using point document reads wherever possible and optimizing the query selections per API. (A sketch contrasting a point read with an equivalent query follows this list.)
  • Use the Azure Cosmos DB SDK 2.x to optimize connection usage – Within Azure Cosmos DB, the data stored in each region is distributed across tens of thousands of physical partitions. To serve reads and writes, the Azure Cosmos DB client SDK must establish a connection with the physical node hosting the partition. The team started by using the Azure Cosmos DB SDK 1.x, but found that its lack of support for connection multiplexing led to excessive connection establishment and closing rates. Switching to the Azure Cosmos DB SDK 2.x, which supports connection multiplexing, helped solve the problem —and also helped mitigate SNAT port exhaustion issues.
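To illustrate the query-design lesson, the snippet below contrasts a point read with an equivalent query and prints the request charge of each. The request-charge header is standard, but treat the exact SDK calls and attribute paths as assumptions to verify against your azure-cosmos version.

```python
from azure.cosmos import CosmosClient

client = CosmosClient("https://<account-name>.documents.azure.com:443/", credential="<account-key>")
container = client.get_database_client("people-core").get_container_client("contacts")

def last_request_charge() -> float:
    # RU cost of the most recent operation, as reported by the service.
    return float(container.client_connection.last_response_headers["x-ms-request-charge"])

# Point read: id + partition key, typically around 1 RU for a small document.
container.read_item(item="user-42:view", partition_key="user-42")
print("point read RUs:", last_request_charge())

# Equivalent query: costs noticeably more RUs even when it returns the same document.
list(container.query_items(
    query="SELECT * FROM c WHERE c.id = @id",
    parameters=[{"name": "@id", "value": "user-42:view"}],
    partition_key="user-42",
))
print("query RUs:", last_request_charge())
```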

The following diagram shows connection status and time_waits when using SDK 1.x.

Chart showing connection when using SDK 1.x

And the following shows the same after the move to SDK 2.x.

Chart showing connection when using SDK 2.x

Azure.Source – Volume 77


Preview | Generally available | News & updates | Technical content | Azure shows | Events | Customers, partners, and industries

Now in preview

Announcing the Azure Functions Premium plan for enterprise serverless workloads

We are pleased to announce the Azure Functions Premium plan in preview, our newest Functions hosting model. This plan enables a suite of long requested scaling and connectivity options without compromising on event-based scale. With the Premium plan you can use pre-warmed instances to run your app with no delay after being idle, you can run on more powerful instances, and you can connect to VNETs, all while automatically scaling in response to load.

Graphical table showing comparison of Consumption and Premium plans

Windows Server 2019 support now available for Windows Containers on Azure App Service

We are happy to announce Windows Server 2019 Container support in public preview. Using a custom Windows container in App Service lets you make the OS changes your app needs, making it easy to migrate an on-premises app that requires a custom OS and software configuration. Windows Container support is available in our West US, East US, West Europe, North Europe, East Asia, and East Australia regions. Windows Containers are not supported in App Service Environments at present.

Web application firewall at Azure Front Door service

We have heard from many of you that security is a top priority when moving web applications onto the cloud. Today, we are very excited to announce our public preview of the Web Application Firewall (WAF) for the Azure Front Door service. By combining the global application and content delivery network with a natively integrated WAF engine, we now offer a highly available platform helping you deliver your web applications to the world, secure and fast!

Azure Media Services: The latest Video Indexer updates from NAB Show 2019

After sweeping up multiple awards with the general availability release of Azure Media Services’ Video Indexer, including the 2018 IABM award for innovation in content management and the prestigious Peter Wayne award, our team has remained focused on building a wealth of new features and models to allow any organization with a large archive of media content to unlock insights from their content, and to use those insights to improve searchability, enable new user scenarios and accessibility, and open new monetization opportunities. At NAB Show 2019, we are announcing a wealth of new enhancements to Video Indexer’s models and experiences.

Now generally available

Extending Azure security capabilities

As more organizations are delivering innovation faster by moving their businesses to the cloud, increased security is critically important for every industry. Azure has built-in security controls across data, applications, compute, networking, identity, threat protection, and security management so you can customize protection and integrate partner solutions. Microsoft Azure Security Center is the central hub for monitoring and protecting against related incidents within Azure. We love making Azure Security Center richer for our customers, and were excited to share some great updates last week at Hannover Messe 2019. Read on to learn about them.

Photo of Azure Dedicated Hardware Security Module (HSM)

Event-driven Java with Spring Cloud Stream Binder for Azure Event Hubs

Spring Cloud Stream Binder for Azure Event Hubs is now generally available. It is now easier to build highly scalable event-driven Java apps using Spring Cloud Stream with Event Hubs, a fully managed, real-time data ingestion service on Azure that is resilient and reliable in any situation, including emergencies, thanks to its geo-disaster recovery and geo-replication features.

Fast and optimized connectivity and delivery solutions on Azure

We’re announcing the availability of innovative and industry-leading Azure services that will help attendees of the National Association of Broadcasters Show realize their future vision to deliver for their audiences: Azure Front Door Service (AFD), ExpressRoute Direct and Global Reach, as well as some cool new additions to both AFD and our Content Delivery Network (CDN). April 6-11, Microsoft will be at NAB Show 2019 in Las Vegas, bringing together an industry centered on the ability to deliver richer content experiences for audiences around the world.

Azure Front Door Service is now generally available

We’re announcing the general availability of Azure Front Door Service (AFD) which we launched in preview last year – a scalable and secure entry point for fast delivery of your global applications. AFD is your one stop solution for your global website/application. Azure Front Door Service enables you to define, manage, and monitor the global routing for your web traffic by optimizing for best performance and instant global failover for high availability. With Front Door, you can transform your global (multi-region) consumer and enterprise applications into robust, high-performance personalized modern applications, APIs, and content that reach a global audience with Azure.

News and updates

Unlock dedicated resources and enterprise features by migrating to Service Bus Premium

Azure Service Bus has been the Messaging as a Service (MaaS) option of choice for our enterprise customers. We’ve seen tremendous growth to our customer base and usage of the existing namespaces, which inspires us to bring more features to the service. We recently expanded Azure Service Bus to support all Azure regions with Availability Zones to help our customers build more resilient solutions. We also expanded the Azure Service Bus Premium tier to more regions to enable our customers to leverage many enterprise ready features on their Azure Service Bus namespaces while also being closer to their customers.

Screenshot of Azure Service Bus migration in the Azure portal

Device template library in IoT Central

With the new addition of a device template library into our Device Templates page, we are making it easier than ever to onboard and model your devices. Now, when you get started with creating a new template, you can choose between building one from scratch or you can quickly select from a library of existing device templates. Today you’ll be able to choose from our MXChip, Raspberry Pi, or Windows 10 IoT Core templates. We will be working to improve this library by adding more device templates which provide customer value.

Alerts in Azure are now all the more consistent!

Azure Monitor alerts provide rich alerting capabilities on a variety of telemetry such as metrics, logs, and activity logs. Over the past year, we have unified the alerting experience by providing a common consumption experience, including UX and API, for alerts. However, the payload format for alerts remained different, which put the burden of building and maintaining multiple integrations, one for each alert type based on telemetry, on the user. We released a new common alert schema that provides a single extensible format for all alert types.

GPS Week Number Rollover – Microsoft has you covered!

Microsoft has completed preparations for the upcoming GPS Week Number Rollover to ensure that users of Microsoft time sources do not experience any impact. Microsoft is aware of this upcoming transition and has reviewed devices and procedures to ensure readiness. Azure products and services that rely on GPS timing devices have received declaration of compliance with IS-GPS-200 from the device manufacturers, mitigating risk to users of Microsoft time sources.

Microsoft Azure portal April 2019 update

This month’s updates include improvements to IaaS, Azure Data Explorer, Security Center, Recovery Services, Role-Based Access Control, Support, and Intune.

Updates to geospatial features in Azure Stream Analytics – Cloud and IoT edge

Azure Stream Analytics is a fully managed PaaS service that helps you run real-time analytics and complex event processing logic on telemetry from devices and applications. We announced several enhancements to geospatial features in Azure Stream Analytics. These features will help customers manage a much larger set of mobile assets and vehicle fleet easily, accurately, and more contextually than previously possible. These capabilities are available both in the cloud and on Azure IoT edge.

Self-service exchange and refund for Azure Reservations

Azure Reservations provide flexibility to help meet your evolving needs. You can exchange a reservation for another reservation of the same type, and you can refund a reservation if you no longer need it.

Azure Sphere Retail and Retail Evaluation feeds

Azure Sphere developers might have noticed that we now have two Azure Sphere OS feeds where once there was only one. The Azure Sphere Preview feed that delivered over-the-air OS updates has been replaced by feeds named Retail Azure Sphere OS and Retail Evaluation Azure Sphere OS. The Retail feed provides a production-ready OS and is intended for broad deployment to end-user installations. The Retail Evaluation feed provides each new OS for 14 days before we release it to the Retail feed. It is intended for backwards compatibility testing.

Azure Updates

Learn about important Azure product updates, roadmap, and announcements. Subscribe to notifications to stay informed.

Technical content

Step up your machine learning process with Azure Machine Learning service

The Azure Machine Learning service provides a cloud-based service you can use to develop, train, test, deploy, manage, and track machine learning models. With Automated Machine Learning and other advancements available, training and deploying machine learning models is easier and more approachable than ever. Automated machine learning helps users of all skill levels accelerate their pipelines, leverage open source frameworks, and scale easily. Automated machine learning, a form of deep machine learning, makes machine learning more accessible across an organization.

Schema validation with Event Hubs

Event Hubs is a fully managed, real-time data ingestion Azure service. It integrates seamlessly with other Azure services. It also allows Apache Kafka clients and applications to talk to Event Hubs without any code changes. Apache Avro is a binary serialization format. It relies on schemas (defined in JSON format) that define what fields are present and their type. Since it's a binary format, you can produce and consume Avro messages to and from Event Hubs. Event Hubs' focus is on the data pipeline; it doesn't validate the schema of the Avro events.

SheHacksPurple: Changes to Azure Security Center Subscription

In this short video Tanya Janca will describe recent changes to Azure Security Center Subscription coverage; it now covers storage containers and app service.

Thumbnail from SheHacksPurple: Changes to Azure Security Center Subscription

PowerShell Basics: Finding the right VM size with Get-AzVMSize

Finding the right virtual machine for your needs can be difficult, especially with all of the options available. New options seem to come around often, so you may need to regularly check the VMs available within your Azure region. Using PowerShell makes it quick and easy to see all of the VM sizes so you can get to building your infrastructure, and Get-AzVMSize will help you determine the VM sizes you can deploy in specific regions, into availability sets, or what size a machine in your environment is running.

Hands-on Lab: Creating an IoT Solution with Kotlin Azure Functions

Dave Glover walks through building an end-to-end IoT solution with Azure IoT Hub, Kotlin-based Azure Functions, and Azure SignalR.

An Ambivert’s Guide to Azure Functions

Chloe Condon will walk you through how to use Azure Functions, Twilio, and a Flic Button to create an app to trigger calls/texts to your phone.

Making Machine Learning Approachable

Often we hear about machine learning and deep learning as topics that only researchers, mathematicians, or PhDs are smart enough to grasp. In fact, it is possible to explain seemingly complex fundamental concepts and algorithms of machine learning without using cryptic terminology or confusing notation.

Monitoring on Azure HDInsight Part 2: Cluster health and availability

This is the second blog post in a four-part series on Monitoring on Azure HDInsight. Monitoring on Azure HDInsight Part 1: An Overview discusses the three main monitoring categories: cluster health and availability, resource utilization and performance, and job status and logs. This blog covers the first of those topics, cluster health and availability, in more depth.

Azure shows

Episode 273 - Application Patterns in Azure | The Azure Podcast

Rasmus Lystrøm, a Senior Microsoft consultant from Denmark, shares his thoughts and ideas around building applications that take advantage of Azure and allow developers to focus on the business problem at hand.

Azure Blob Storage on Azure IoT Edge | Internet of Things Show

Azure Blob Storage on IoT Edge is a lightweight, Azure-consistent module that provides local block blob storage. It comes with configurable abilities to automatically tier data from the IoT Edge device to Azure and to automatically delete data from the IoT Edge device after a specified time.

Azure Pipelines | Visual Studio Toolbox

In this episode, Robert is joined by Mickey Gousset, who takes us on a tour of Azure Pipelines. He shows how straightforward it is to automate your builds and deployments using Azure Pipelines. They are a great way to get started on your path to using DevOps practices to ship faster at higher quality.

Deploy WordPress with Azure Database for MariaDB | Azure Friday

Learn how to deploy WordPress backed by Azure Database for MariaDB. It is the latest addition to the open source database services available on the Azure platform and further strengthens Azure's commitment to open source and its communities. The service offers built-in high availability, automatic backups, and scaling of resources to meet your workload's needs.

Hybrid enterprise serverless in Microsoft Azure | Microsoft Mechanics

Apply serverless compute securely and confidently to any workload with new enterprise capabilities. Jeff Hollan, Sr. Program Manager from the Azure Serverless team, demonstrates how you can turn on managed service identities and protect secrets with Key Vault integration, control virtual network connectivity for both Functions and Logic Apps, build apps that integrate with systems inside your virtual network using event-driven capabilities and set cost thresholds to control how much you want to scale with the Azure Functions Premium plan.


Virtual node autoscaling and Azure Dev Spaces in Azure Kubernetes Service (AKS) | Microsoft Mechanics

Recent updates to the Azure Kubernetes Service (AKS) for developers and ops. Join Ria Bhatia, Program Manager for Azure Kubernetes Service, as she shows you the new autoscaling options using virtual nodes as well as how you can use Azure Dev Spaces to test your AKS apps without simulating dependencies. Also, check out the new ways to troubleshoot and monitor your Kubernetes apps with Azure Monitor.


How to host a static website with Azure Storage | Azure Tips and Tricks

In this edition of Azure Tips and Tricks, learn how you can host a static website running in Azure Storage in a few steps.


How to use the Azure Activity Log | Azure Portal Series

The Azure Activity Log informs you of the who, the what, and the when for operations on your Azure resources. In this video of the Azure Portal “How To” Series, learn what activity logs are in the Azure Portal, how to access them, and how to make use of them.


Ted Neward on the ‘Ops’ Side of DevOps | Azure DevOps Podcast

Ted Neward and Jeffrey Palermo are going to be talking about the ‘Ops’ (AKA the operations) side of DevOps. They discuss how operations is implemented in the DevOps movement, the role of operations, how Dev and Ops should work together, what companies should generally understand around the different roles, where the industry is headed, and Ted’s many recommendations in the world of DevOps.

Episode 5 - CodeCamping with Philly.NET founder Bill Wolff | AzureABILITY

Philly.NET founder and coding-legend Bill Wolff visits the podcast to talk about both the forthcoming Philly Code Camp 2019.1 and the user-group experience in general.

Events

Welcome to NAB Show 2019 from Microsoft Azure!

At NAB Show 2019 this week in Las Vegas we’re announcing new Azure rendering, Azure Media Services, Video Indexer and Azure Networking capabilities to help you achieve more. We’ll also showcase how partners such as Zone TV and Nexx.TV are using Microsoft AI and Azure Cognitive Services to create more personalized content and improve monetization of existing media assets.

Deliver New Services | Hannover Messe 2019

With intelligent manufacturing technology, you can deliver new services, innovate faster to reduce time to market, and increase your margins. At the Hannover Messe 2019 event, discover how Microsoft and partners are empowering companies to create new business value with digital services to develop data-driven and AI-enhanced products and services.


Database administrators, discover gold in the cloud

Data is referred to these days as “the new oil” or “black gold” of industry. If the typical Fortune 100 company gains access to a mere 10 percent more of their data, that can result in increased revenue of millions of dollars. Recently, one of our teams discovered new technology that enables us to do more with less—like agile development helping us deploy new features and software faster to market, and DevOps ensuring it was done with less impact to mission-critical systems. To learn more, attend a free webinar where we’ll be sharing more on the many advantages of managing data in the cloud, and how your company’s “black gold” will make you tomorrow’s data hero.

Customers, partners, and industries

IoT in Action: Enabling cloud transformation across industries

The intelligent cloud and intelligent edge are sparking massive transformation across industries. As computing gets more deeply embedded in the real world, powerful new opportunities arise to transform revenue, productivity, safety, customer experiences, and more. According to a white paper by Keystone Strategy, digital transformation leaders generate eight percent more per year in operating income than other enterprises. Here we lay out a typical cloud transformation journey and provide examples of how the cloud is transforming city government, industrial IoT, and oil and gas innovators.

Enabling precision medicine with integrated genomic and clinical data

Kanteron Systems Platform is a patient-centric, workflow-aware, precision medicine solution. Their answer to data sitting in silos, detached from the point of care, integrates many key types of healthcare data, including medical imaging, digital pathology, clinical genomics, and pharmacogenomic data, into a complete longitudinal patient record to power precision medicine.

Spinnaker continuous delivery platform now with support for Azure

Spinnaker is an open source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence. It is being chosen by a growing number of enterprises as the open source continuous deployment platform used to modernize their application deployments. With this blog post and the recent release of Spinnaker (1.13), we are excited to announce that Microsoft has worked with the core Spinnaker team to ensure Azure deployments are integrated into Spinnaker.

The future of manufacturing is open

At Hannover Messe 2019, we launched the Open Manufacturing Platform (OMP) together with the BMW Group, our partner on this initiative. Built on the Microsoft Azure Industrial IoT cloud platform, the OMP will provide a reference architecture and open data model framework for community members who will both contribute to and learn from others around industrial IoT projects.

 


Azure Stack HCI solutions, Premium Block Blob Storage and new capabilities in the Azure AI space! | Azure This Week - A Cloud Guru

This time on Azure This Week, Lars discusses Microsoft’s hybrid cloud strategy which gets another push with hyper-converged infrastructure, Azure Premium Block Blob Storage is now generally available, and AI developers get more goodies on the Azure platform.


Be sure to check out the new series from A Cloud Guru, Azure Fireside Chats.

What to expect in the new Microsoft Edge Insider Channels


Today we are shipping the first Dev and Canary channel builds of the next version of Microsoft Edge, based on the Chromium open-source project. We’re excited to be sharing this work at such an early stage in our development process. We invite you to try out the preview today on your devices, and we look forward to working together with the Microsoft Edge Insider community to make browsing the best experience possible for everyone.

In this post, we’ll walk you through how the new release channels work, and share a closer look at our early work in the Chromium open source project, as well as what’s coming next.

Introducing the Microsoft Edge Insider Channels

The new Microsoft Edge builds are available through preview channels that we call “Microsoft Edge Insider Channels.” We are starting by launching the first two Microsoft Edge Insider Channels, Canary and Dev, which you can download and try at the Microsoft Edge Insider site. These channels are available starting today on all supported versions of Windows 10, with more platforms coming soon.

Screenshot of download page showing three Microsoft Edge Insider Channels - Beta Channel, Dev Channel, and Canary Channel

Canary channel will be updated daily, and Dev channel will be updated weekly. You can even choose to install multiple channels side-by-side for testing—they will have separate icons and names so you can tell them apart. Support for other platforms, like Windows 7, Windows 8.1, macOS, and other channels, like Beta and Stable, will come later.

Every night, we produce a build of Microsoft Edge―if it passes automated testing, we’ll release it to the Canary channel. We use this same channel internally to validate bug fixes and test brand new features. The Canary channel is truly the bleeding edge, so you may discover bugs before we’ve had a chance to discover and fix them. If you’re eager for the latest bits and don’t mind risking a bug or two, this is the channel for you.

If you prefer a build with slightly more testing, you might be interested in the Dev channel. The Dev channel is still relatively fresh―it’s the best build of the week from the Canary channel. We look at several sources, like user feedback, automated test results, performance metrics, and telemetry, to choose the right Canary build to promote to the Dev channel. If you want to use the latest development version of Microsoft Edge as a daily driver, this is the channel for you. We expect most users will be on the Dev channel.

Later, we will also introduce the Beta and Stable channels. The Beta channel reflects a significantly more stable release and will be a good target for Enterprises and IT Pros to start piloting the next version of Microsoft Edge.

We are not changing the existing version of Microsoft Edge installed on your devices at this time – it will continue to work side by side with the builds from any of the Microsoft Edge Insider Channels.

Adopting and contributing to the Chromium open source project

When we initially announced our decision to adopt Chromium as the foundation for future versions of Microsoft Edge, we published a set of open source principles and declared our intent to contribute to the Chromium project to make Microsoft Edge and other Chromium-based browsers better on PCs and other devices.

While we will continue to focus on delivering a world class browsing experience with Microsoft Edge’s user experience and connected services, when it comes to improving the web platform, our default position will be to contribute to the Chromium project.

We still have a lot to learn as we increase our use of and contributions to Chromium, but we have received great support from Chromium engineers in helping us get involved in this project, and we’re pleased to have landed some modest but meaningful contributions already. Our plan is to continue working in Chromium rather than creating a parallel project, to avoid any risk of fragmenting the community.

Our early contributions include landing over 275 commits into the Chromium project since we joined this community in December. We also have started to make progress on some of the initial areas of focus we had shared:

Accessibility

We are committed to building a more accessible web platform for all users. Today, Microsoft Edge is the only browser to earn a perfect score on the HTML5Accessibility browser benchmark, and we’re hoping to bring those contributions to the Chromium project and improve web experiences for all users.

  • Modern accessibility APIs. To enable a better accessibility experience for screen readers, like Windows Narrator, magnifiers, braille displays, and other accessibility tools, we’ve shared our intent to implement support for the Microsoft UI Automation interfaces, a more modern and secure Windows accessibility framework, in Chromium. We’re partnering with Google’s Accessibility team and other Chromium engineers to land commits and expect the full feature to be completed later this year.
  • High contrast. To ensure our customers have the best readability experience, we’re working in the W3C CSS working group to standardize the high-contrast CSS Media query and have shared our intent to implement it in Chromium. This will allow customers to use the Windows Ease of Access settings to select their preferred color contrast settings to improve content readability on Windows devices.
  • HTML video caption styling. We’ve partnered with Chromium engineers to land support for Windows Ease of Access settings to improve caption readability on Windows 10.
  • Caret browsing. For customers who use their keyboard to navigate the web and select text, we’ve shared our intent to implement caret browsing in Chromium.
  • We’re starting to work with our Chromium counterparts to improve the accessibility of native web controls, like media and input controls. Over time we expect this work will help Chromium earn a perfect score on the HTML5Accessibility browser benchmark.

ARM64

We’ve been collaborating with Google engineers to enable Chromium to run natively on Windows on ARM devices starting with Chromium 73. With these contributions, Chromium-based browsers will soon be able to ship native implementations for ARM-based Windows 10 PCs, significantly improving their performance and battery life.

Touch

To help our customers with touch devices get the best possible experience, we’ve implemented better support for Windows touch keyboard in Chromium, now supporting touch text suggestions as you type and “shape writing” that lets you type by swiping over keys without releasing your finger.

Scrolling

Microsoft Edge is known for class-leading scrolling experiences on the web today, and we’re collaborating closely with Chromium engineers to make touchpad, touch, mouse wheel, keyboard, and sidebar scrolling as smooth as possible. We’re still early in this investigation, but have started sharing some ideas in this area.

Media

Premium media sites use the encrypted media extensions (EME) web standard and digital rights management (DRM) systems to protect streaming media content so that it can only be played by users authorized by the streaming service. In fact, Microsoft and other industry partners were recognized with a Technology & Engineering Emmy award yesterday for helping bring premium media to the web through this and other web standards. To provide users with the highest level of compatibility and web developers with technology choice, Microsoft Edge now supports both Microsoft PlayReady and Google Widevine DRM systems.

While Microsoft Edge often gets highest resolution and bitrate video because it uses the robust hardware-backed Microsoft PlayReady DRM, there are some sites that only support the Google Widevine DRM system. Sites that rely on hardware-backed PlayReady DRM on Microsoft Edge will be able to continue to stream 1080p or 4k with high dynamic range (HDR) or Dolby Vision, while those that only support Widevine will just work in Microsoft Edge for the first time.

We also want to help contribute improvements to video playback power efficiency that many of our Microsoft Edge users have come to expect. We’re early in these investigations but will be partnering closely with the Chromium team on how we can help improve this space further.

Windows Hello

Microsoft Edge supports the Windows Hello authenticator as a more personal and secure way to use biometrics authentication on the web for password-less and two-factor authentication scenarios. We’ve worked with the Chromium team to land Windows Hello support in the Web Authentication API in Chromium – you can try that experience out today by using Microsoft Edge Dev or Canary preview builds on the latest Windows 10 Insider Preview release.

Evolving the web through standards

While we’re participating in the Chromium open source project, we still believe the evolution of the open web is best served through the standards communities, and the open web benefits from open debate from a wide variety of perspectives.

We are continuing to remain deeply engaged in standards discussions where the perspectives of vendors developing different browsers and the larger web community can be heard and considered. You can keep track of all Microsoft explainer documents on the Microsoft Edge Explainers GitHub.

HTML Modules

For example, we recently introduced the HTML Modules proposal, which is now being developed in the W3C and WHATWG Web Components Incubation Groups.

We’ve heard from web developers that while ES6 Script Modules are a great way for developers to componentize their code and create better dependency management systems, the current approach doesn’t help developers who use declarative HTML markup. This has forced developers to re-write their code to generate markup dynamically.

We’ve taken lessons learned from HTML Imports to introduce an extension of the ES6 Script Modules system to include HTML Modules. Considering the early support we’ve received on this feature from standards discussions, we’ve also shared our intent to implement this feature in Chromium.

User Agent String

With Microsoft Edge adopting Chromium, we are changing our user agent string to closely resemble that of the Chromium user agent string with the addition of the “Edg” token. If you’re blocking site access on user agent strings, please update your logic to treat this string as another Chromium-based browser.

Below is the user agent string for the latest Dev Channel build of Microsoft Edge:

Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.48 Safari/537.36 Edg/74.1.96.24

We’ve selected the “Edg” token to avoid compatibility issues that may be caused by using the string “Edge,” which is used by the current version of Microsoft Edge based on EdgeHTML. The “Edg” token is also consistent with existing tokens used on iOS and Android. We recommend that developers use feature detection where possible and avoid browser version detection through the user agent string, as it results in more maintenance and fragile code.
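For any server-side logic that currently gates access on user agent strings, the safest interim change is to treat the “Edg” token as just another Chromium-based browser; client-side feature detection remains the better long-term approach. A hypothetical server-side check might look like this:

```python
def is_chromium_based(user_agent: str) -> bool:
    """Rough server-side classification of Chromium-family browsers.

    The next version of Microsoft Edge sends "Edg/<version>" alongside the usual
    "Chrome/<version>" token, so any Chromium allow-list should accept both.
    """
    ua = user_agent or ""
    return "Chrome/" in ua or "Edg/" in ua

# The Dev Channel user agent string quoted above is classified as Chromium-based.
ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/74.0.3729.48 Safari/537.36 Edg/74.1.96.24")
assert is_chromium_based(ua)
```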

User Experience

We are committed to building a world class browser with Microsoft Edge through differentiated user experience features and connected services. With this initial release, we have made a number of changes to the user interface to make our product feel more like Microsoft Edge.

However, you will continue to see the look and feel of the browser evolve in future releases as we iterate and listen to customer feedback. We do not plan to contribute user experience or Microsoft service changes to Chromium, since browser vendors generally like to make their own decisions in these areas.

We know that this initial release is still missing a few features that are available in the current version of Microsoft Edge. We’re in the early stages and are intentionally focusing on fundamentals as we continue to work towards a complete feature set.

Over time, we will roll out new features and run experiments to gauge user interest and satisfaction, and to assess the quality of each new feature or improvement. This will help us ensure that all new features address our customers’ needs in the best possible way and meet our quality standards.

Integration with Microsoft services

While the next version of Microsoft Edge will be based on Chromium, we intend to use the best of Microsoft wherever we can, including our services integrations. Some of these services integrations include:

  • Bing Search powers search and address bar suggestions by default.
  • Windows Defender SmartScreen delivers best-in-class phishing and malware protection when navigating to sites and downloading content.
  • Microsoft Account service and Azure Active Directory can now be used to sign in to the browser to help you manage both your personal and work accounts. You can even use multiple identities at the same time in different browser sessions.
  • Microsoft Activity Feed Service synchronizes your data across Microsoft Edge preview builds. We currently synchronize your favorites across your Windows 10 desktop devices running Microsoft Edge preview builds. In future builds, we will also sync passwords, browsing history, and other settings across all supported platforms, including Microsoft Edge on iOS and Android.
  • Microsoft News powers the new tab experience, giving you the choice of an inspirational theme with vivid Bing images, a focused theme that helps you get straight to work, or a more news focused informational theme.

Feedback

Getting your feedback is an important step in helping us make a better browser – we consider it essential to create the best possible browsing experience. If you run into any issues or have feedback, please use the “Send Feedback” tool in Microsoft Edge. Simply click the smiley face next to the Menu button and let us know what you like or if there’s something we can improve.

For web developers, if you encounter an issue that reproduces in Chromium, it’s best to file a Chromium bug. For problems in the existing version of Microsoft Edge, please continue to use the EdgeHTML Issue Tracker.

You can also find the latest information on the next version of Microsoft Edge and get in touch with the product team to share feedback or get help on the Microsoft Edge Insider site.

We’re delighted to share our first Canary and Dev builds of the next version of Microsoft Edge! We hope you’ll try the preview out today, and we look forward to hearing your feedback in the Microsoft Edge Insider community.

Jatinder Mann, Group Program Manager, Web Platform
John Hazen, Group Program Manager, Operations

The post What to expect in the new Microsoft Edge Insider Channels appeared first on Microsoft Edge Blog.

Want to evaluate your cloud analytics provider? Here are the three questions to ask.


We all want the truth. To properly assess your cloud analytics provider, ask them about the only three things that matter:

  1. Independent benchmark results
  2. Company-wide access to insights
  3. Security and privacy

What are their results on independent, industry-standard benchmarks? 

Perhaps you’ve heard from other providers that benchmarks are irrelevant. If that’s what you’re hearing, maybe you should be asking yourself why? Independent, industry-standard benchmarks are important because they help you measure price and performance on both common and complex analytics workloads. They are essential indicators of value because as data volumes grow, it is vital to get the best performance you can at the lowest price possible.

In February, an independent study by GigaOm compared Azure SQL Data Warehouse, Amazon Redshift, and Google BigQuery using the highly recognized TPC-H benchmark. They found that Azure SQL Data Warehouse is up to 14x faster and costs 94 percent less than other cloud providers. And today, we are pleased to announce that in GigaOm’s second benchmark report, this time with the equally important TPC-DS benchmark, Azure SQL Data Warehouse is again the industry leader. Not Amazon Redshift. Not Google BigQuery. These results prove that Azure is the best place for all your analytics.

Price performance comparison

This is why customers like Columbia Sportswear choose Azure.

“Azure SQL Data Warehouse instantly gave us equal or better performance as our current system, which has been incrementally tuned over the last 6.5 years for our demanding performance requirements.”

Lara Minor, Sr. Enterprise Data Manager, Columbia Sportswear

Columbia Sportswear logo 

Can they easily deliver powerful insights across your organization?

Insights from your analytics must be accessible to everyone in your organization. While other providers may say they can deliver this, the end result is often catered to specific workgroups versus being an enterprise-wide solution. Data can become quickly siloed in these situations, making it difficult to deliver insights across all users.

With Azure, employees can get their insights in seconds from all enterprise data. Data can seamlessly flow from your SQL Data Warehouse to Power BI. And without limitations on concurrency, Power BI can be used across teams to create the most beautiful visualizations that deliver powerful insights. This combination of powerful analytics with easy-to-use BI is quite unique. In fact, if you look at the Gartner 2019 Magic Quadrant for Analytics and Business Intelligence Platforms and the Gartner 2019 Magic Quadrant for Data Management Solutions for Analytics below, you’ll see that Microsoft is a Leader.

 

Gartner 2019 Magic Quadrant for Analytics and Business Intelligence Platforms and the Gartner 2019 Magic Quadrant for Data Management Solutions for Analytics

 

Our leadership position in BI, coupled with our undisputed performance in analytics means that customers can truly provide business-critical insights to all. As the TPC-DS benchmark demonstrates, Azure SQL Data Warehouse provides unmatched performance on complex analytics workloads that mimic the realities of your business. This means that Power BI users can effortlessly gain granular-level insights across all their data.

The TPC-DS industry benchmark I mentioned above is particularly useful for organizations that run intense analytics workloads because it uses demanding queries to test actual performance. For instance, one of the queries used in the TPC-DS benchmark report calculates the number of orders, time window for the orders, and filters by state on non-returned orders shipped from a single warehouse. This type of complex query, which spans across billions of rows and multiple tables, is a real-world example of how companies use a data warehouse for business insights. And with Power BI, users can perform intense queries like this by easily integrating with SQL Data Warehouse for fast, industry-leading performance.

How robust is their security?

Everyone is a target. When it comes to data, privacy and security are non-negotiable. No matter how cautious you are, there is always a threat lurking around the corner. Your analytics system contains the most valuable business data and must have both stringent security and privacy capabilities.

Azure has you covered. As illustrated by Donald Farmer, a well-respected thought leader in the analytics space, analytics in Azure has the most advanced security and privacy features in the market. From proactive threat detection to providing custom recommendations that enhance security, Azure SQL Data Warehouse uses machine learning and AI to secure your data. It also enables you to encrypt your data, both in flight and at rest. You can provide users with appropriate levels of access, from a single source, using row and column level security. This not only secures your data, but also helps you meet stringent privacy requirements.

“It was immediately clear to us that with Azure, particularly Azure Key Vault, we would be able to meet our own rigorous requirements for data protection and security.”

Guido Vetter, Head of Corporate Center of Excellence Advanced Analytics & Big Data, Daimler

Daimler logo

Azure’s leading security and data privacy features not only make it the most trusted cloud in the market, but also complement its leadership in other areas, such as price-performance, making it simply unmatched.

Get started today

To learn more about Azure’s industry-leading price-performance and security, get started today!

 

 

Gartner Magic Quadrant for Analytics and Business Intelligence Platforms Cindi Howson, James Richardson, Rita Sallam, Austin Kronz, 11 February 2019.

Gartner Magic Quadrant for Data Management Solutions for Analytics, Adam Ronthal, Roxane Edjlali, Rick Greenwald, 21 January 2019.

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from Microsoft.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Smarter, faster, safer: Azure SQL Data Warehouse is simply unmatched

$
0
0

Today, we want to call attention to the exciting news that Azure SQL Data Warehouse has again outperformed other cloud providers in the most recent GigaOm benchmark report.

This is the result of relentless innovation and laser-focused execution on providing new features our customers need, all while reducing prices so customers get industry-leading performance at the best possible value. In just the past year, SQL Data Warehouse has released 130+ features focused on providing customers with enhanced speed, flexibility, and security. And today we are excited to announce three additional enhancements that continue to make SQL Data Warehouse the industry leader:

  • Unparalleled query performance
  • Intelligent workload management
  • Unmatched security and privacy

In this blog, we’ll take a closer look at the technical capabilities of these new features and, most importantly, how you can start using them today.

Unparalleled query performance

In our March 2019 release, a collection of newly available features improved workload performance by up to 22x compared to previous versions of Azure SQL Data Warehouse, which contributed to our leadership position in both the TPC-H and TPC-DS benchmark reports.

This didn’t just happen overnight. Azure SQL Data Warehouse draws on decades of experience building industry-leading database systems like SQL Server, and it is built on top of one of the world’s largest cloud architectures.

Key innovations that have improved query performance include:

  • Query Optimizer enhancements
  • Instant Data Movement
  • Additional advanced analytic functions

Query Optimizer enhancements

The Query Optimizer is one of the most critical components in any database. Making optimal choices about how best to execute a query can and does yield significant improvements. When executing complex analytical queries, the number of operations executed in a distributed environment matters. Every opportunity to eliminate redundant computation, such as repeated subqueries, has a direct impact on query performance. For instance, the following query is reduced from 13 operations down to 5 using the latest Query Optimizer enhancements.

Animated GIF displaying Query Optimizer enhancements

Instant Data Movement

For a distributed database system, having the most efficient data movement mechanism is also a critical ingredient in achieving great performance. Instant Data Movement was introduced with the launch of the second generation of Azure SQL Data Warehouse. To improve instant data movement performance, broadcast and partition data movement operations were added. In addition, performance optimizations around how strings are processed during the data movement operations yielded improvements of up to 2x.

Advanced analytic functions

A rich set of analytic functions simplifies how you write SQL across multiple dimensions, which not only streamlines the query but also improves its performance. Examples of such functions are GROUP BY ROLLUP, GROUPING(), and GROUPING_ID(). See the GROUP BY ROLLUP example from the online documentation below; it returns sales aggregated per (Country, Region) pair, subtotals per country, and a grand total in a single query:

SELECT Country
      ,Region
      ,SUM(Sales) AS TotalSales
FROM Sales
GROUP BY ROLLUP(Country, Region)
ORDER BY Country
        ,Region;

Intelligent workload management

The new workload importance feature in Azure SQL Data Warehouse enables prioritization of the workloads that run on the data warehouse system. Workload importance gives administrators the ability to prioritize workloads based on business requirements (for example, executive dashboard queries or ELT executions).

Workload classification

It all starts with workload classification. SQL Data Warehouse classifies a request based on a set of criteria that administrators can define; in the absence of a matching classifier, the default classifier is chosen. Classification can be based on the SQL query, a database user, a database role, an Azure Active Directory login, or an Azure Active Directory group, and it maps the request to a system-defined workload group.

Workload importance

Each workload classification can be assigned one of five levels of importance: low, below_normal, normal, above_normal, and high. Access to resources during compilation, lock acquisition, and execution are prioritized based on the associated importance of a request.

The diagram below illustrates the workload classification and importance function:

Animated GIF illustrating workload classification and importance

Classifying requests with importance

Classifying requests is done with the new CREATE WORKLOAD CLASSIFIER syntax. Below is an example that maps members of the ExecutiveReports role to ABOVE_NORMAL importance and members of the AdhocUsers role to BELOW_NORMAL importance. With this configuration, members of the ExecutiveReports role have their queries complete sooner because they get access to resources before members of the AdhocUsers role.

CREATE WORKLOAD CLASSIFIER ExecReportsClassifier
   WITH (WORKLOAD_GROUP = 'mediumrc'
        ,MEMBERNAME     = 'ExecutiveReports'
        ,IMPORTANCE     =  above_normal);
 
CREATE WORKLOAD CLASSIFIER AdhocClassifier
    WITH (WORKLOAD_GROUP = 'smallrc'
         ,MEMBERNAME     = 'AdhocUsers'
         ,IMPORTANCE     =  below_normal);

For more information on workload importance, refer to the classification importance and CREATE WORKLOAD CLASSIFIER documents.

Unmatched security and privacy

When using a data warehouse, customers often have questions regarding security and privacy. As illustrated by Donald Farmer, a well-respected thought leader in the analytics space, Azure SQL Data Warehouse has the most advanced security and privacy features in the market. This wasn’t achieved by chance. In fact, SQL Server, the core technology of SQL Data Warehouse, has been the least vulnerable database over the last eight years in the NIST vulnerabilities database.

One of our newest security and privacy features in SQL Data Warehouse is Data Discovery and Classification. This feature enables automated discovery of columns potentially containing sensitive data, recommends metadata tags to associate with the columns, and can persistently attach those tags to your tables.

These tags will appear in the Audit log for queries against sensitive data, in addition to being included alongside the query results for clients which support this feature.

The Azure SQL Database Data Discovery & Classification article walks you through enabling the feature via the Azure portal. While the article was written for Azure SQL Database, it is now equally applicable to SQL Data Warehouse.

Next steps

Azure is the best place for data analytics

Azure continues to be the best cloud for analytics. Learn more about why analytics in Azure is simply unmatched.

Bitnami Apache Airflow Multi-Tier now available in Azure Marketplace

$
0
0

A few months ago, we released a blog post that provided guidance on how to deploy Apache Airflow on Azure. The template in the blog provided a good quick start solution for anyone looking to quickly run and deploy Apache Airflow on Azure in sequential executor mode for testing and proof of concept study. However, the template was not designed for enterprise production deployments and required expert knowledge of Azure app services and container deployments to run it in Celery Executor mode. This is where we partnered with Bitnami to help simplify production grade deployments of Airflow on Azure for customers.

We are excited to announce that the Bitnami Apache Airflow Multi-Tier solution and the Apache Airflow Container are now available for customers in the Azure Marketplace. Bitnami Apache Airflow Multi-Tier template provides a 1-click solution for customers looking to deploy Apache Airflow for production use cases. To see how easy it is to launch and start using them, check out the short video tutorial.

We are proud to say that the main committers to the Apache Airflow project have also tested this application to ensure that it performs to the standards they would expect.

Apache Airflow PMC Member and Core Committer Kaxil Naik said, “I am excited to see that Bitnami provided an Airflow Multi-Tier in the Azure Marketplace. Bitnami has removed the complexity of deploying the application for data scientists and data engineers, so they can focus on building the actual workflows or DAGs instead. Now, data scientists can create a cluster for themselves within about 20 minutes. They no longer need to wait for DevOps or a data engineer to provision one for them.”

What is Apache Airflow?

Apache Airflow is a popular open source workflow management tool used in orchestrating ETL pipelines, machine learning workflows, and many other creative use cases. It provides a scalable, distributed architecture that makes it simple to author, track and monitor workflows.

Users of Airflow create Directed Acyclic Graph (DAG) files to define the processes and tasks that must be executed, in what order, and their relationships and dependencies. DAG files are synchronized across nodes and the user will then leverage the UI or automation to schedule, execute and monitor their workflow.
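
To make the DAG concept concrete, here is a minimal sketch of a DAG file in Python, assuming an Airflow 1.10-style installation like the one Bitnami packages; the DAG name, schedule, and bash commands are illustrative placeholders only.

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

default_args = {
    "owner": "data-engineering",   # placeholder owner
    "retries": 1,
    "retry_delay": timedelta(minutes=5),
}

# The DAG object defines the workflow; each operator below is one task in it.
dag = DAG(
    dag_id="example_etl",
    default_args=default_args,
    start_date=datetime(2019, 4, 1),
    schedule_interval="@daily",
)

extract = BashOperator(task_id="extract", bash_command="echo extract", dag=dag)
load = BashOperator(task_id="load", bash_command="echo load", dag=dag)

# The dependency expresses execution order: extract runs before load.
extract >> load

Dropping a file like this into the shared DAG directory described below is all it takes for every node to pick up the new workflow.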

Introduction to Bitnami’s Apache Airflow Multi-tier architecture

Bitnami Apache Airflow has a multi-tier distributed architecture that uses Celery Executor, which is recommended by Apache Airflow for production environments.

It is comprised of several synchronized nodes:

  • Web server (UI)
  • Scheduler
  • Workers

It includes two managed Azure services:

  • Azure Database for PostgreSQL
  • Azure Cache for Redis

All nodes have a shared volume to synchronize DAG files.

DAG files are stored in a directory on each node. This directory is an external volume mounted in the same location on all nodes (workers, scheduler, and web server). Since it is a shared volume, the files are automatically synchronized between servers. Add, modify, or delete DAG files from this shared volume and the entire Airflow system will be updated.

You can also use DAGs from a GitHub repository. By using Git, you won’t have to access any of the Airflow nodes and you can just push the changes through the Git repository instead.

To automatically synchronize DAG files with Airflow, please refer to Bitnami’s documentation.

Bitnami’s secret sauce - Packaging for production use

Bitnami specializes in packaging multi-tier applications to work right out of the box, leveraging managed Azure services like Azure Database for PostgreSQL.

When packaging the Apache Airflow Multi-Tier solution, Bitnami added a few optimizations to ensure that it would work for production needs.

  • Pre-packaged to leverage the most popular deployment strategies. For example, using PostgreSQL as the relational metadata store and the Celery executor.
  • Role-based access control is enabled by default to secure access to the UI.
  • The cache and the metadata store are Azure-native PaaS services, which bring additional benefits such as data redundancy and retention/recovery options, and allow Airflow to scale out to large jobs.
  • All communication between Airflow nodes and the PostgreSQL database service is secured using SSL.

To learn more, join Azure, Apache Airflow, and Bitnami for a webinar on Wednesday, May 1st at 11:00 am PST. Register now.

Get Started with Apache Airflow Multi-Tier Certified by Bitnami today!

How to stay informed about Azure service issues

$
0
0

Azure Service Health helps you stay informed and take action when Azure service issues like outages and planned maintenance affect you. It provides you with a personalized dashboard that can help you understand issues that may be impacting resources in your Azure subscriptions.

For any event, you can get guidance and support, share details with your colleagues, and receive issue updates. Most importantly, you can configure customizable alerts to automatically notify you of service issues, planned maintenance, and health advisories.

We’ve posted a new video series to help you learn how to use Azure Service Health and ensure you stay on top of service issues.

Watch the first video now:

Set up your Azure Service Health alerts today by visiting Azure Service Health in the Azure portal.

For more in-depth guidance, visit the Azure Service Health documentation. Let us know if you have a suggestion for Service Health by submitting an idea via this page or by sending us an email at servicehealth@microsoft.com.


How do teams work together on an automated machine learning project?

$
0
0

How do teams work together on an automated machine learning project?

When it comes to executing a machine learning project in an organization, data scientists, project managers, and business leads need to work together to deploy the best models to meet specific business objectives. A central objective of this early stage is to identify the key business variables that the analysis needs to predict. We refer to these variables as the model targets, and we use the metrics associated with them to determine the success of the project.

In this use case, available to the public on GitHub, we’ll see how a data scientist, project manager, and business lead at a retail grocer can leverage automated machine learning and Azure Machine Learning service to reduce product overstock. Azure Machine Learning service is a cloud service that you use to train, deploy, automate, and manage machine learning models, all at the broad scale that the cloud provides. Automated machine learning within Azure Machine Learning service is the process of taking training data with a defined target feature, and iterating through combinations of algorithms and feature selections to automatically select the best model for your data based on the training scores.

Excess stock quickly becomes a liquidity problem: it is not converted back to cash unless margins are reduced through discounts and promotions or, even worse, it accumulates and is sent to other channels such as outlets, delaying its sale. Identifying in advance which products will not turn over as expected, and controlling replenishment so that stock cover is aligned with sales forecasts, are key factors in helping retailers achieve ROI on their investments. Let’s see how the team goes about solving this problem and how automated machine learning enables the democratization of artificial intelligence across the company.

Identify the right business objective for the company

Strong sales and profits are the result of having the right product mix and level of inventory. Achieving this ideal mix requires having current and accurate inventory information. Manual processes not only take time, causing delays in producing current and accurate inventory information, but also increase the likelihood of errors. These delays and errors are likely to cause lost revenue due to inventory overstocks, understocks, and out-of-stocks.

Overstock inventory can also take up valuable warehouse space and tie up cash that ought to be used to purchase new inventory. But selling it in liquidation mode can cause its own set of problems, such as tarnishing your reputation and cannibalizing sales of other current products.

The project manager, being the bridge between data scientists and business operations, reaches out to the business lead to discuss the possibility of using some of their internal and historical sales data to solve their overstock inventory problem. The project manager and the business lead define project goals by asking and refining tangible questions that are relevant to the business objective.

There are two main tasks addressed in this stage:

  • Define objectives: The project manager and the business lead need to identify the business problems and, most importantly, formulate questions that define the business goals that the data science techniques can target.
  • Identify data sources: The project manager and data scientist need to find relevant data that helps answer the questions that define the objectives of the project.

Look for the right data and pipeline

It all starts with data. The project manager and the data scientist need to identify data sources that contain known examples of answers to the business problem. They look for the following types of data:

  • Data that is relevant to the question. Do they have measures of the target and features that are related to the target?
  • Data that is an accurate measure of their model target and the features of interest.

There are three main tasks that the data scientist needs to address in this stage:

  1. Ingest the data into the target analytics environment
  2. Explore the data to determine if the data quality is adequate to answer the question
  3. Set up a data pipeline to score new or regularly refreshed data

After setting up the process to move the data from the source locations to the target locations where it’s possible to run analytics operations, the data scientist starts working on raw data to produce a clean, high-quality data set whose relationship to the target variables is understood. Before training machine learning models, the data scientist needs to develop a sound understanding of the data and create a data summarization and visualization to audit the quality of the data and provide the information needed to process the data before it's ready for modeling.

Finally, the data scientist is also in charge of developing a solution architecture of the data pipeline that refreshes and scores the data regularly.

Forecast orange juice sales with automated machine learning

The data scientist and project manager decide to use automated machine learning for a few reasons: automated machine learning empowers customers, with or without data science expertise, to identify an end-to-end machine learning pipeline for any problem, achieving higher accuracy while spending far less of their time. And it also enables a significantly larger number of experiments to be run, resulting in faster iteration toward production-ready intelligent experiences.

Let’s look at how their process using automated machine learning for orange juice sales forecasting delivers on these benefits.

After agreeing on the business objective and what type of internal and historical data should be used to meet that objective, the data scientist creates a workspace. The workspace is the top-level resource for the service and provides data scientists with a centralized place to work with all the artifacts they create. When a workspace is created in the Azure Machine Learning service, the following Azure resources are added automatically (if they are regionally available):

  • Azure Container Registry
  • Azure Storage
  • Azure Application Insights
  • Azure Key Vault

To run automated machine learning, the data scientist also needs to create an Experiment. An Experiment is a named object in a workspace that represents a predictive task, the output of which is a trained model and a set of evaluation metrics for the model.
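
As a rough sketch of these two steps with the Azure Machine Learning SDK for Python (the names, region, and subscription ID below are placeholders, not values from the use case):

from azureml.core import Workspace, Experiment

# Create the workspace, or reuse it if it already exists; names are placeholders.
ws = Workspace.create(
    name="retail-forecasting-ws",
    subscription_id="<subscription-id>",
    resource_group="retail-forecasting-rg",
    location="eastus",
    exist_ok=True,
)

# The experiment groups all automated ML runs for this forecasting task.
experiment = Experiment(workspace=ws, name="oj-sales-forecasting")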

The data scientist is now ready to load the historical orange juice sales data, reading the CSV file into a plain pandas DataFrame. The time column in the CSV is called WeekStarting, so it is explicitly parsed into the datetime type.

Each row in the DataFrame holds a quantity of weekly sales for an orange juice brand at a single store. The data also includes the sales price, a flag indicating if the orange juice brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also includes the logarithm of the sales quantity.

The task is now to build a time series model for the Quantity column. It’s important to note that this data set is composed of many individual time series, one for each unique combination of Store and Brand. To distinguish the individual time series, we therefore define the grain: the columns whose values determine the boundaries between time series.
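
A minimal sketch of that loading step with pandas might look like the following; the file name is a placeholder, while the column names come from the description above.

import pandas as pd

# Parse WeekStarting as a datetime so each series has a proper time index.
data = pd.read_csv("oj-sales.csv", parse_dates=["WeekStarting"])

# The grain: one time series per unique (Store, Brand) combination.
grain_column_names = ["Store", "Brand"]
print("Individual time series:", data.groupby(grain_column_names).ngroups)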

After splitting the data into a training and a testing set for later forecast evaluation, the data scientist starts working on the modeling step. For forecasting tasks, automated machine learning uses pre-processing and estimation steps that are specific to time series, and it will undertake the following pre-processing steps:

  • Detect the time series sample frequency (e.g., hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span.
  • Impute missing values in the target via forward-fill and feature columns using median column values.
  • Create grain-based features to enable fixed effects across different series.
  • Create time-based features to assist in learning seasonal patterns.
  • Encode categorical variables to numeric quantities.

The AutoMLConfig object defines the settings and data for an automated machine learning training job. Below is a summary of automated machine learning configuration parameters that were used for training the orange juice sales forecasting model:

Summary of automated machine learning configuration parameters.
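
The configuration summary itself is an image in the original post, so as a hedged sketch only, here is roughly what such a configuration might look like with a 2019-era version of the SDK; exact parameter names vary between SDK versions, and the horizon, iteration budget, and variable names are assumptions rather than the values used in the report.

from azureml.train.automl import AutoMLConfig

# Time-series settings drawn from the data set described above: WeekStarting is
# the time column, Quantity is the target, and Store + Brand define the grain.
automl_settings = {
    "time_column_name": "WeekStarting",
    "grain_column_names": ["Store", "Brand"],
    "max_horizon": 20,  # illustrative forecast horizon, in weeks
}

automl_config = AutoMLConfig(
    task="forecasting",
    primary_metric="normalized_root_mean_squared_error",
    iterations=10,                        # illustrative iteration budget
    X=train.drop(columns=["Quantity"]),   # 'train' is the training split created earlier
    y=train["Quantity"].values,
    n_cross_validations=3,
    **automl_settings,
)

# Submit to the experiment and retrieve the best pipeline once the run completes.
run = experiment.submit(automl_config, show_output=True)
best_run, fitted_model = run.get_output()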

Visit GitHub for more information on forecasting. Each iteration runs within the experiment and stores its serialized pipeline; when the run completes, the data scientist retrieves the pipeline with the best performance on the validation data set.

Once the evaluation has been performed, the data scientist, project manager, and business lead meet again to review the forecasting results. It’s the project manager and business lead’s job to make sense of the outputs and choose practical steps based on those results. The business lead needs to confirm that the best model and pipeline meet the business objective and that the machine learning solution answers the questions with acceptable accuracy to deploy the system to production for use by their internal sales forecasting application.

Microsoft invests in Automated Machine Learning

Automated machine learning is based on a breakthrough from the Microsoft Research division. The approach combines ideas from collaborative filtering and Bayesian optimization to search an enormous space of possible machine learning pipelines intelligently and efficiently. It’s essentially a recommender system for machine learning pipelines. Similar to how streaming services recommend movies for users, automated machine learning recommends machine learning pipelines for data sets.

It’s now offered as part of the Azure Machine Learning service. As you’ve seen here, Automated machine learning empowers customers, with or without data science expertise, to identify an end-to-end machine learning pipeline for any problem and save time while increasing accuracy. It also enables a larger number of experiments to be run and faster iterations. How could automated machine learning benefit your organization? How could your team work more closely on using machine learning to meet your business objectives?

 

Resources

How to accelerate DevOps with Machine Learning lifecycle management

$
0
0

DevOps is the union of people, processes, and products to enable the continuous delivery of value to end users. DevOps for machine learning is about bringing the lifecycle management of DevOps to Machine Learning. By applying DevOps practices to Machine Learning, teams can easily manage, monitor, and version models while simplifying workflows and the collaboration process.

Effectively managing the Machine Learning lifecycle is critical for DevOps’ success. The first piece of machine learning lifecycle management is building your machine learning pipeline(s).

What is a Machine Learning Pipeline? 

DevOps for Machine Learning includes data preparation, experimentation, model training, model management, deployment, and monitoring while also enhancing governance, repeatability, and collaboration throughout the model development process. Pipelines allow for the modularization of phases into discrete steps and provide a mechanism for automating, sharing, and reproducing models and ML assets. They create and manage workflows that stitch together machine learning phases. Essentially, pipelines allow you to optimize your workflow with simplicity, speed, portability, and reusability.

There are four steps involved in deploying machine learning that data scientists, engineers and IT experts collaborate on:

  1. Data Ingestion and Preparation
  2. Model Training and Retraining
  3. Model Evaluation
  4. Deployment

Diagram illustrating the four steps of the machine learning pipeline

Together, these steps make up the Machine Learning pipeline. Below is an excerpt from the documentation on building machine learning pipelines with Azure Machine Learning service, which explains it well.

“Using distinct steps makes it possible to rerun only the steps you need, as you tweak and test your workflow. A step is a computational unit in the pipeline. As shown in the preceding diagram, the task of preparing data can involve many steps. These include, but aren't limited to, normalization, transformation, validation, and featurization. Data sources and intermediate data are reused across the pipeline, which saves compute time and resources.”
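
To give a feel for what these distinct steps look like in code, here is a minimal sketch using the Azure Machine Learning SDK for Python; the script names, compute target name, and directory layout are assumptions for illustration only.

from azureml.core import Workspace, Experiment
from azureml.pipeline.core import Pipeline, PipelineData
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()
compute_target = ws.compute_targets["cpu-cluster"]  # assumes an existing compute cluster

# Intermediate data handed from the data-preparation step to the training step.
prepared_data = PipelineData("prepared_data", datastore=ws.get_default_datastore())

prep_step = PythonScriptStep(
    name="prepare data",
    script_name="prep.py",
    source_directory="./prep",
    outputs=[prepared_data],
    compute_target=compute_target,
)

train_step = PythonScriptStep(
    name="train model",
    script_name="train.py",
    source_directory="./train",
    inputs=[prepared_data],
    compute_target=compute_target,
)

# The data dependency makes training wait for preparation; steps with no
# dependency between them are free to run in parallel.
pipeline = Pipeline(workspace=ws, steps=[prep_step, train_step])
run = Experiment(ws, "pipeline-demo").submit(pipeline)

Because intermediate data and each step’s outputs are tracked, rerunning the pipeline later only re-executes the steps whose inputs have changed.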

4 benefits of accelerating Machine Learning pipelines for DevOps

     

  1. Collaborate easily across teams

    • Data scientists, data engineers, and IT professionals using machine learning pipelines need to collaborate on every step involved in the machine learning lifecycle: from data prep to deployment.
    • The Azure Machine Learning service workspace is designed to make the pipelines you create visible to the members of your team. You can use Python to create your machine learning pipelines and interact with them in Jupyter notebooks, or in another preferred integrated development environment.

  2. Simplify workflows

    • Data prep and modeling can last days or weeks, taking time and attention away from other business objectives.
    • The Azure Machine Learning SDK offers imperative constructs for sequencing and parallelizing the steps in your pipelines when no data dependency is present. You can also templatize pipelines for specific scenarios and deploy them to a REST endpoint, so you can schedule batch-scoring or retraining jobs. When you rerun a pipeline, you only need to rerun the steps you need as you tweak and test your workflow.

  3. Centralized Management

    • Tracking models and their version histories is a hurdle many DevOps teams face when building and maintaining their machine learning pipelines.
    • The Azure Machine Learning service model registry tracks models, their version histories, their lineage, and their artifacts. Once a model is in production, the Application Insights service collects both application and model telemetry that allows the model to be monitored in production for operational and model correctness. The data captured during inferencing is presented back to the data scientists and can be used to determine model performance, data drift, and model decay. The service also provides the tools to train, manage, and deploy machine learning experiments and web services in one central view.
    • The Azure Machine Learning SDK also allows you to submit and track individual pipeline runs. You can explicitly name and version your data sources, inputs, and outputs instead of manually tracking data and result paths as you iterate, and you can manage scripts and data separately for increased productivity. For each step in your pipeline, Azure coordinates between the various compute targets you use, so that your intermediate data can be shared with the downstream compute targets easily. You can track the metrics for your pipeline experiments directly in the Azure portal.

  4. Track your experiments easily

    • DevOps capabilities for machine learning further improve productivity by enabling experiment tracking and management of models deployed in the cloud and on the edge. All these capabilities can be accessed from any Python environment running anywhere, including data scientists’ workstations. The data scientist can compare runs, and then select the “best” model for the problem statement.
    • The Azure Machine Learning workspace keeps a list of compute targets that you can use to train your model. It also keeps a history of the training runs, including logs, metrics, output, and a snapshot of your scripts. You can create multiple workspaces, or use common workspaces shared by multiple people.

Conclusion

As you can see, DevOps for Machine Learning can be streamlined across the ML pipeline with more visibility into training, experiment metrics, and model versions. Azure Machine Learning service seamlessly integrates with Azure services to provide end-to-end capabilities for the entire Machine Learning lifecycle, making it simpler and faster than ever.

This is part two of a four-part series on the pillars of Azure Machine Learning services. Check out part one if you haven’t already, and be sure to look out for our next blog, where we’ll be talking about ML at scale.

Learn More

Visit our product site to learn more about the Azure Machine Learning service, and get started with a free trial of Azure Machine Learning service.

        April Security Release: Patches available for Azure DevOps Server 2019, TFS 2018.3.2, TFS 2018.1.2, TFS 2017.3.1, and the release of TFS 2015.4.2

        $
        0
        0

        For the April security release, we are releasing fixes for vulnerabilities that impact Azure DevOps Server 2019, TFS 2018, TFS 2017, and TFS 2015. These vulnerabilities were found through our Azure DevOps Bounty Program. Thanks to everyone who has been participating in this program.

        CVE-2019-0857: spoofing vulnerability in the Wiki

        CVE-2019-0866: remote code execution vulnerability in Pipelines

        CVE-2019-0867: cross site scripting (XSS) vulnerability in Pipelines

        CVE-2019-0868: cross site scripting (XSS) vulnerability in Pipelines

        CVE-2019-0869: HTML injection vulnerability in Pipelines

        CVE-2019-0870: cross site scripting (XSS) vulnerability in Pipelines

        CVE-2019-0871: cross site scripting (XSS) vulnerability in Pipelines

        CVE-2019-0874: cross site scripting (XSS) vulnerability in Pipelines

        CVE-2019-0875: elevation of privilege vulnerability in Boards

        Azure DevOps Server 2019 Patch 1

        If you have Azure DevOps Server 2019, you should install Azure DevOps Server 2019 Patch 1.

        Verifying Installation

        To verify if you have this update installed, you can check the version of the following file: [INSTALL_DIR]\Application Tier\Web Services\bin\Microsoft.TeamFoundation.Server.WebAccess.VersionControl.dll. Azure DevOps Server 2019 is installed to c:\Program Files\Azure DevOps Server 2019 by default.

        After installing Azure DevOps Server 2019 Patch 1, the version will be 17.143.28804.3.

        TFS 2018 Update 3.2 Patch 3

        If you have TFS 2018 Update 2 or Update 3, you should first update to TFS 2018 Update 3.2. Once on Update 3.2, install TFS 2018 Update 3.2 Patch 3.

        Verifying Installation

        To verify if you have this update installed, you can check the version of the following file: [TFS_INSTALL_DIR]\Application Tier\Web Services\bin\Microsoft.TeamFoundation.WorkItemTracking.Web.dll. TFS 2018 is installed to c:\Program Files\Microsoft Team Foundation Server 2018 by default.

        After installing TFS 2018 Update 3.2 Patch 3, the version will be 16.131.28728.4.

        TFS 2018 Update 1.2 Patch 3

        If you have TFS 2018 RTW or Update 1, you should first update to TFS 2018 Update 1.2. Once on Update 1.2, install TFS 2018 Update 1.2 Patch 3.

        Verifying Installation

        To verify if you have this update installed, you can check the version of the following file: [TFS_INSTALL_DIR]\Application Tier\Web Services\bin\Microsoft.TeamFoundation.Server.WebAccess.Admin.dll. TFS 2018 is installed to c:\Program Files\Microsoft Team Foundation Server 2018 by default.

        After installing TFS 2018 Update 1.2 Patch 3, the version will be 16.122.28801.2.

        TFS 2017 Update 3.1 Patch 4

        If you have TFS 2017, you should first update to TFS 2017 Update 3.1. Once on Update 3.1, install TFS 2017 Update 3.1 Patch 4.

        Verifying Installation

        To verify if you have a patch installed, you can check the version of the following file: [TFS_INSTALL_DIR]\Application Tier\Web Services\bin\Microsoft.TeamFoundation.Server.WebAccess.Admin.dll. TFS 2017 is installed to c:\Program Files\Microsoft Team Foundation Server 15.0 by default.

        After installing TFS 2017 Update 3.1 Patch 4, the version will be 15.117.28728.0.

        TFS 2015 Update 4.2

        If you are on TFS 2015, you should upgrade to TFS 2015 Update 4.2 with the ISO or Web Install. This is a full upgrade and will require you to run the Upgrade Wizard.

        The post April Security Release: Patches available for Azure DevOps Server 2019, TFS 2018.3.2, TFS 2018.1.2, TFS 2017.3.1, and the release of TFS 2015.4.2 appeared first on Azure DevOps Blog.

        .NET Core April 2019 Updates – 2.1.10 and 2.2.4

        $
        0
        0

        Today, we are releasing the .NET Core April 2019 Update. These updates contain security and reliability fixes. See the individual release notes for details on included fixes.

        Security

        Microsoft Security Advisory CVE-2019-0815: ASP.NET Core Denial of Service Vulnerability

        A denial of service vulnerability exists in ASP.NET Core 2.2 where, if an application is hosted on Internet Information Server (IIS), a remote unauthenticated attacker can use a specially crafted request to cause a denial of service.

        The vulnerability affects any Microsoft ASP.NET Core 2.2 application hosted on an IIS server running AspNetCoreModuleV2 (ANCM) version 12.2.19024.2 or earlier. The security update addresses the vulnerability by ensuring the IIS worker process does not crash in response to specially crafted requests.

        Getting the Update

        The latest .NET Core updates are available on the .NET Core download page.

        See the .NET Core release notes ( 2.1.10 | 2.2.4 ) for details on the release, including issues fixed and affected packages.

        Docker Images

        .NET Docker images have been updated for today’s release. The following repos have been updated.

        microsoft/dotnet
        microsoft/dotnet-samples
        microsoft/aspnetcore

        Note: Look at the “Tags” view in each repository to see the updated Docker image tags.

        Note: You must re-pull base images in order to get updates. The Docker client does not pull updates automatically.

        Azure App Services deployment

        Deployment of these updates to Azure App Services has been scheduled, and the deployment is estimated to be complete by April 23, 2019.

        The post .NET Core April 2019 Updates – 2.1.10 and 2.2.4 appeared first on .NET Blog.

        Microsoft at SAP Sapphire NOW 2019: A trusted path to cloud innovation

        $
        0
        0

        In a few weeks, over 22,000 people from around the globe will converge in Orlando, Florida from May 7-9, 2019 for the SAP Sapphire NOW and ASUG Annual Conference. Each year, the event brings together thought leaders across industries to find innovative ways to solve common challenges, unlock new opportunities, and take advantage of emerging technologies that are changing the business landscape as we know it. This year, Microsoft has elevated its presence to the next level with engaging in-booth experiences and informative sessions that will educate, intrigue, and inspire attendees as they take the next step in their digital transformation journey.

        Modernize your SAP landscapes

        While running SAP on-premises was once business as usual, it is quickly becoming obsolete for businesses looking to compete and win. With the power of the cloud, enterprises have real-time data with intelligent insights from machine learning and artificial intelligence at their fingertips, can spin up a dev-test environment or an application server in minutes instead of hours, and can back up a virtual machine in a few mouse clicks.

        At SAP SAPPHIRE NOW, you’ll have the opportunity to get a better understanding on the business value of moving your SAP applications to Azure:

        • On Tuesday, May 7, 2019 from 12:00 PM – 12:40 PM, we will host a session on “Innovating with SAP HANA on Microsoft Azure.” The session will cover how SAP customers are accelerating innovation velocity and saving costs for high-performance SAP HANA applications by moving to Azure.
        • On Tuesday, May 7, 2019 from 3:00 PM – 3:20 PM, we will host a session on “Microsoft’s journey to SAP S/4 HANA on Azure.” In this session you’ll learn how Microsoft migrated to Azure and is now leveraging it to transform its existing SAP landscape and begin migrating to S/4HANA.
        • On Wednesday, May 8, 2019 from 11:30 AM – 11:50 AM, we will host a session on “Lessons learned from migrating SAP applications to the cloud with Microsoft Azure.” The session will share the lessons Microsoft learned during migration, along with best practices to help you transform your existing SAP landscape and start migrating to Azure. To learn more about Microsoft’s journey to running SAP on Azure, check out our IT showcase story: SAP on Azure—your trusted path to innovation in the cloud.
        • Visit the Microsoft booth, #729, for one of our in-booth theatre sessions on topics like “Optimizing your SAP landscapes in Azure” and “SAP on Azure deployment journey and lessons learned,” or get hands-on with Azure at one of our in-booth demo pods.

        Explore IoT, AI, and machine learning

        Every organization is challenged with doing things faster, cheaper, and smarter to keep up with the ever-evolving pace of innovation. To stay agile in a competitive landscape, businesses need to start thinking about how to leverage emerging technology advancements like IoT solutions and artificial intelligence to better serve customers, build more innovative solutions, and obtain a 360-degree view of the business.

        At SAP SAPPHIRE NOW, you’ll have the chance to talk with solution experts from Microsoft around creative ways to leverage technology to solve your most challenging business problems:

        • On Tuesday, May 7, 2019 from 2:00 PM – 2:20 PM, we will host a session, “Harness the power of IoT Data across Intelligent Edge and Intelligent Cloud.” In this session, you’ll learn how you can combine innovations in IoT technology at the edge and in the cloud with SAP business processes, using the power of Microsoft Azure IoT to achieve transformative innovation for your business.
        • Stop by booth #729 to experience our Azure Data Services and Analytics demo to learn how you can connect data from multiple inputs and applications to provide a unified view of your business. You can also learn more about how IoT solutions can help you take a step closer to digital transformation by experiencing our Azure IoT demo.

        Learn about cloud migration from our trusted partners

        There are different paths to migrate to SAP HANA and Azure, depending on your business needs. Microsoft’s SAP on Azure partners can work with you to determine the best way to migrate your SAP applications to the cloud.

        At SAP SAPPHIRE NOW, you’ll find multiple opportunities to connect with partners:

        • Join a partner-led session at our in-booth theatre. We’ll have partners from organizations like SAP and Accenture, so you can learn how running your SAP landscapes in the cloud can provide your business with more agility, better security, and reduced costs.
        • After the show-floor dies down, we encourage you to engage with Microsoft and our partners at various co-sponsored, partner-led events throughout the week.
        • Also, stop by our booth (#729) to speak with many of our leading partner organizations to learn about the services they provide to help you on your journey to running SAP on Azure.

        Discover business transformation

        Look for Microsoft at SAP SAPPHIRE NOW 2019 and see for yourself why the leading enterprises across industries bet their businesses on the technology that Microsoft and SAP provide for a first-and-best pathway to running SAP applications in the cloud.

        Sign up for live updates at our dedicated SAP SAPPHIRE NOW 2019 event page.


        What’s new in Azure DevOps Sprint 149?

        $
        0
        0

        Sprint 149 has just finished rolling out to all organisations today and you can check out all the cool features in the release notes. Here is just a snapshot of some of the features that you can start using today.

        Navigate to Azure Boards work items directly from mentions in any GitHub comment

        Want to mention a work item in a GitHub comment? Well, now you can. When you mention a work item within the comment of an issue, pull request, or commit in GitHub using the AB#{work item ID} syntax, those mentions will become hyperlinks that you can click on to navigate directly to the mentioned work item.

        This doesn’t create a formal link that clutters up the work item in Azure Boards for every related conversation, but instead gives your team a way to provide a little more information about work items while discussing code or a customer-reported issue. Check out the Azure Boards GitHub integration documentation for more information.

        Azure Boards GitHub Enterprise support

        Teams can now connect Azure Boards projects to repositories hosted in GitHub Enterprise Server instances. By connecting your Azure DevOps Server project with your GitHub Enterprise Server repositories, you support linking between GitHub commits and pull requests to work items. You can use GitHub Enterprise for software development while using Azure Boards to plan and track your work. Follow the steps in the documentation to get started.

        Private projects now get 60 minutes of run time per pipeline job

        In this sprint, a free account (that is, one which has not purchased parallel jobs) can now run a job for up to 60 minutes at a time (instead of 30 minutes), with up to 1,800 minutes per month. If you need to run your pipeline for more than 60 minutes, you can pay for additional capacity per parallel job or run it on a self-hosted agent. Self-hosted agents don’t have job length restrictions. Learn more about pricing here.

        GitHub comments trigger optimizations

        We improved the experience for teams who use GitHub pull request comments to trigger builds. For security reasons, these teams usually don’t want to build pull requests automatically. Instead, they want a team member to review the pull request and, once it’s deemed safe, trigger the build with a pull request comment. The new setting helps with this, whilst still allowing automatic pull request builds for team members only.

        These are just the tip of the iceberg, and there is plenty more that we’ve released in Sprint 149. Check out the full list of features for this sprint in the release notes.

        The post What’s new in Azure DevOps Sprint 149? appeared first on Azure DevOps Blog.


        New features for extension authors in Visual Studio 2019 version 16.1

        $
        0
        0

        Earlier this week, we released Visual Studio 2019 version 16.1 Preview 1 (see release notes). It’s the first preview of the first update to Visual Studio 2019. If you’re not already set up to get preview releases, then please do that now. The preview channel installs side-by-side with the release channel and they don’t interfere with each other. I highly recommend all extension authors install the preview.

        Got the 16.1 preview installed now then? That’s great. Here are some features in it you might find interesting.

        Shared Project support

        There are several reasons why extension authors sometimes must split an extension into multiple projects to support the various versions of Visual Studio. If you’re using an API that did not exist in an earlier version of Visual Studio, or if there are breaking changes between the versions you want to support, there’s now a simpler, easier way to split your extension.

        With Visual Studio 2019 version 16.1 Preview 1, we’ve added support for referencing Shared Projects from VSIX projects in the same solution.

        You can place common code in a separate Shared Project that compiles directly into the VSIX projects at build time. The only code that then exists in the VSIX projects themselves is code specific to the Visual Studio version each one supports. The result is two separate VSIXs that target their own specific Visual Studio version range and share most of the code from the Shared Project. Check out the code for the Extension Manager extension, which does exactly this.

        No more need for .resx file

        When adding commands, menus, etc. using a VSCT file, you must specify a .resx file marked with the MergeWithCTO MSBuild property. The templates in Visual Studio take care of adding that file, and they also add a .ico file referenced by the .resx file. However, the need for a .resx is an implementation detail, and most extensions don’t need to use it.

        In an effort to make VSIX projects simpler, the requirement for the .resx/.ico files has been removed when using the latest Microsoft.VSSDK.BuildTools NuGet package, version 16.0 or newer.

        Behind the scenes, the NuGet package provides an empty .resx to compile with the MergeWithCTO property unless you registered your own in the project.

        Per-monitor awareness

        Additional per-monitor awareness support is being enabled in 16.1 when .NET Framework 4.8 is installed. Windows Forms UI now handles DPI scaling across monitors better. However, this may cause UI issues in your extension after installing .NET Framework 4.8.

        When using Windows Forms in an extension, you can match the Visual Studio 2017 scaling behaviors by wrapping your form or control creation in a DpiAwareness.EnterDpiScope call.

        // Wrap control creation in a system-aware DPI scope to match the
        // Visual Studio 2017 scaling behavior.
        using (DpiAwareness.EnterDpiScope(DpiAwarenessContext.SystemAware))
        using (var form = new MyForm())
        {
            form.ShowDialog();
        }

        All you need is to add a reference to the Microsoft.VisualStudio.DpiAwareness NuGet package. You can use this package in extensions targeting earlier versions of Visual Studio too, but be aware that it will only take effect when running in 16.1 and newer, so it is safe to use in extensions spanning multiple versions of Visual Studio.

        To make it easier to simulate multiple monitors running with different DPI scaling, an engineer from the Visual Studio IDE team built a handy little tool and put it on GitHub. The team used this tool while they were adding support for per-monitor awareness, so you may find it helpful too.

        Read more about how to deal with per-monitor awareness for extenders.

        Synchronous auto-load disabled

        18 months ago, we sent an email to extension partners announcing the deprecation of synchronous auto-loading of extension packages. A year ago, we followed up with a more detailed blog post outlining that synchronously auto-loaded packages would be unsupported in a future version of Visual Studio. That version is 16.1.

        There are great samples on how to migrate to AsyncPackage with background load enabled, and most extensions today have already made the transition. If you haven’t already, now is a good time to do that before 16.1 goes out of preview.

        New SDK meta package

        The meta package Microsoft.VisualStudio.SDK is a single NuGet package that references all the various Visual Studio packages that make up the SDK. The cool thing about the meta package is that you have access to all the interfaces and services. In addition, you also avoid issues with mismatched package versions.

        When we released Visual Studio 2019 (16.0), the VSIX Project template referenced the 15.9 version of the SDK meta package. That was because the 16.0 version was still under development. All the individual packages had to be published to NuGet before we could take dependency on them from the meta package.

        The good news is that the 16.0 version is now ready. You should use it if the lowest version of Visual Studio your extension supports is 16.0, and you can read more about extension versioning here.

        The post New features for extension authors in Visual Studio 2019 version 16.1 appeared first on The Visual Studio Blog.

        Top Stories from the Microsoft DevOps Community – 2019.04.12

        $
        0
        0

        I’m back from a few weeks of travelling – a fun mix of conferences and holiday – and I’m happy to be home. I’m particularly excited that I’ll be here in England for the Global Azure Bootcamp in just a few weeks. It’s coming up on April 27, it’s all about Azure and Cloud Computing, and it’s taking place in locations across the world. So now I’ve just got to figure out which location I want to visit! There’s an event near you, too, so maybe I’ll see you there!

        Azure DevOps Server 2019 Install Guide
        If you’re gearing up to install Azure DevOps Server 2019 (the on-premises version of Azure DevOps, formerly Team Foundation Server) then you’ll love this comprehensive walkthrough from Ben Day. It covers everything from installing the operating system to Azure DevOps Server to the build and release agent.

        Node.js + AKS on Azure DevOps
        One of the easiest ways to bootstrap a complete pipeline for a new application – even with sample code – is with Azure DevOps Projects. Emily Freeman walks through setting up a container-based Azure DevOps pipeline for a new node.js project.

        Azure DevOps: Recommended Practices for Secure Pipelines
        It’s critical to keep your Azure DevOps accounts and your pipelines secure. The best start is reading the data protection whitepaper. But Michael Pedersen is collecting additional recommended practices for secure pipelines.

        Open a project from an Azure DevOps repo in Visual Studio 2019
        One of my favorite new features in Visual Studio 2019 is the new Start Window (and, yes, it’s a start window). Abou Conde shows you how it lets you get started with a project in a Git repository hosted in Azure Repos right away.

        Do you want to move to YAML pipelines? Here is how I would do it
        The new YAML-based pipelines are awesome because they let you check in your build configuration right with your code. But they can be daunting for first time users. Matteo Emili introduces the new assistant that lets you combine the power of YAML with the simplicity of the visual designer.

        As always, if you’ve written an article about Azure DevOps or find some great content about DevOps on Azure then let me know! I’m @ethomson on Twitter.

        The post Top Stories from the Microsoft DevOps Community – 2019.04.12 appeared first on Azure DevOps Blog.

        Accessibility Insights for the Web and Windows makes accessibility even easier

        $
        0
        0

        I recently stumbled upon https://accessibilityinsights.io. There's both a Chrome/Edge extension and a Windows app, both designed to make it easier to find and fix accessibility issues in your websites and apps.

        The GitHub repository for the Accessibility Insights extension for the web is at https://github.com/Microsoft/accessibility-insights-web, and they have three release trains you can get on.

        It builds on top of the Deque Axe core engine with a really fresh UI. The "FastPass" found these issues with my podcast site in seconds - which kind of makes me feel bad, but at least I know what's wrong!

        However, the most impressive visualization in my opinion was the Tab Stop test! See below how it draws clear numbered line segments as you Tab from element to element. This is a brilliant way to understand exactly how someone without a mouse would move through your site.

        I can easily see what elements are interactive and what's totally inaccessible with a keyboard! I can also see if the tab order is inconsistent with the logical order that's communicated visually.

        Visualized Tab Stops as numbered points on a line segment that moves through the DOM

        After the FastPass and Tab Visualizations, there's an extensive guided assessment that walks you through 22 deeper accessibility areas, each with several sub issues you might run into. As you move through each area, most have Visual Helpers to help you find elements that may have issues.

        Checking for accessible elements on a web site

        After you're done, you can export your results as a self-contained HTML file that you can check in and then compare with future test results.

        There is also an Accessibility Insights for Windows if I wanted to check, for example, the accessibility of the now open-source Windows Calculator https://github.com/Microsoft/calculator.

        It also supports Tab Stop visualization and is a lot like Spy++ - if you remember that classic developer app. There were no Accessibility issues with Calculator - which makes sense since it ships with Windows and a lot of people worked to make it Accessible.

        Instead I tried to test Notepad2. Here you can see it found two elements that can have keyboard focus but have no names. Even cooler, you can click "New Bug" and it will create a new accessibility bug for you in Azure DevOps.

        Test Results for Windows apps being checked for accessibility

        The Windows app is also open source and up at https://github.com/Microsoft/accessibility-insights-windows for you to explore and file issues! There's also excellent developer docs to get you up to speed on the organization of the codebase and how each class and project works.

        You can download both of these free open source Accessibility Tools at https://accessibilityinsights.io and start testing your websites and apps. I have some work to do!


        Sponsor: Seq delivers the diagnostics, dashboarding, and alerting capabilities needed by modern development teams - all on your infrastructure. Download at https://datalust.co/seq.




        Changes to Coded UI Test in Visual Studio 2019


        With the release of Visual Studio 2019, we announced that it will be the last version of Visual Studio that contains the Coded UI test functionality. I wanted to give a little bit more context on the decision and answer some of the common questions we’ve been getting on this.

        Why we are deprecating Coded UI Test

Coded UI tests are used for UI-driven functional automation of web apps and desktop apps. Open source UI testing tools such as Selenium and Appium have gained momentum in recent years, have strong community backing, and are now pretty much industry standards. Coded UI's own cross-browser testing solution was itself based on Selenium. Additionally, both Selenium and Appium work cross-platform and support multiple programming languages.

Over the last few years, as development teams have become more agile and a faster release cadence has become the norm, testing practices have evolved as well. Test automation focus is shifting from predominantly UI-driven testing to more unit testing and API testing. We had blogged about our own test journey here. We see a similar experience repeated with customers who use Selenium or Appium instead of Coded UI. As part of that, customers have found it useful to evaluate their test bed and carry forward only a subset of tests with the new tools – this has helped them migrate efficiently by reducing redundancy and eliminating any tests that were no longer needed or useful.

        Recommendation for alternatives to Coded UI Test

We've been recommending for a while that customers use the open source Selenium and Appium tools, so the Visual Studio 2019 release marks the final deprecation of Coded UI Test.

        We recommend using Selenium for testing web-applications and Appium with WinAppDriver for testing desktop (WPF, WinForms, Win32) and UWP apps. For testing Dynamics 365 apps, we recommend using the EasyRepro framework that is built on top of Selenium.
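To give a sense of what a replacement web test looks like, here is a minimal Selenium sketch using the Python bindings; the URL, element IDs, and values are placeholders, and the same test could be written in C# or any other Selenium language binding. Appium with WinAppDriver follows a very similar WebDriver-style pattern for desktop apps.

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")                        # hypothetical app under test
    driver.find_element(By.ID, "username").send_keys("test-user")  # hypothetical element IDs
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "submit").click()
    assert "Dashboard" in driver.title                             # hypothetical expected title
finally:
    driver.quit()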

        Support

Coded UI Test in Visual Studio 2019 will continue to be supported for any issues that may arise during the support lifecycle of the product. As outlined in the product lifecycle and servicing documentation, you can continue to use Coded UI Test and it will be fully supported for the next 5 years, with an additional 5 years of extended support available should you need it. However, this is limited to specific bug fixes; no new features will be added.

Different Visual Studio versions can be installed side by side. This means that developers will be able to continue using Visual Studio 2019 to maintain any existing Coded UI test assets, while using newer Visual Studio versions, as they become available, for other development needs.

The same side-by-side installation mechanism allows CI/CD pipelines to keep running smoothly, without any interruptions, while you migrate. This is because when Coded UI tests run as part of a CI/CD pipeline in Azure DevOps, they run against a particular Visual Studio version installed on the agent, or against a particular version of the test platform. We will continue to support running tests against Visual Studio 2019 or its associated test platform in newer versions of Azure DevOps Server (formerly TFS) until the support lifecycle of Visual Studio 2019 ends. This means that you will not need to maintain two different Azure DevOps Server versions to keep your existing Coded UI tests running while you migrate.

        Migration

There are no automated migration tools available to move from Coded UI Test to Selenium or Appium at this time. We recommend that any new test collateral be built using the alternatives, and that you plan the replacement of older Coded UI tests so that it is completed before the end of the Visual Studio 2019 support lifecycle. As part of this process, we recommend that customers re-evaluate their test portfolio and remove tests that are no longer useful.

Premier Support for Enterprise can be engaged for help with migrating tests to the alternatives; they can be reached via email at premdevinfo@microsoft.com.

        I hope that this post is helpful in answering any questions you may have. If you have any further queries, please reach out to the team at devops_tools@microsoft.com.

        The post Changes to Coded UI Test in Visual Studio 2019 appeared first on Azure DevOps Blog.

        Blocking ads before they enter your house at the DNS level with pi-hole and a cheap Raspberry Pi


        Lots of folks ask me about Raspberry Pis. How many I have, what I use them for. At last count there's at least 22 Raspberry Pis in use in our house.

A Pi-hole is a Raspberry Pi appliance that takes the form of a DNS blocker at the network level. You image a Pi, set up your network to use that Pi as its DNS server, and maybe whitelist a few sites when things don't work.

I was initially skeptical, but I'm giving it a try. It doesn't process all your network traffic; it's a DNS hop on the way out that intercepts DNS requests for known problematic sites and serves back nothing.

        Installation is trivial if you just run unread and untrusted code from the 'net ;)

        curl -sSL https://install.pi-hole.net | bash

        Otherwise, follow their instructions and download the installer, study it, and run it.

I put my Pi-hole installation on the metal, but there's also a very nice Docker Pi-hole setup if you prefer that. You can even go further if, like me, you have a Synology NAS, which can also run Docker, which can in turn run a Pi-hole.

        Within the admin interface you can tail the logs for the entire network, which is also amazing to see. You think you know what's talking to the internet from your house - you don't. Everything is logged and listed. After installing the Pi-hole roughly 18% of the DNS queries heading out of my house were blocked. At one point over 23% were blocked. Oy.
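If you want to spot-check what the Pi-hole is doing from a script, here's a quick sketch using dnspython. The Pi-hole address and the domain are placeholders; depending on its blocking mode, Pi-hole answers blocked names with 0.0.0.0 or its own address.

import dns.resolver   # pip install dnspython (2.x)

PIHOLE_IP = "192.168.1.2"     # hypothetical Pi-hole address
DOMAIN = "ads.example.com"    # hypothetical domain to test

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = [PIHOLE_IP]   # query the Pi-hole directly

try:
    answer = resolver.resolve(DOMAIN, "A")
    print(DOMAIN, "->", [rr.to_text() for rr in answer])
except dns.resolver.NXDOMAIN:
    print(DOMAIN, "returned NXDOMAIN")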

NOTE: If you're using an Amplifi HD or any "clever" router, you'll want to change the "Bypass DNS cache" setting; otherwise the Amplifi will remain the DNS lookup of choice on your network. This setting will also confuse the Pi-hole, and you'll end up with just one "client" of the Pi-hole - the router itself.

For me it's less about advertising - especially on small blogs or news sites I want to support - and more about obnoxious tracking cookies and JavaScript. I'm going to keep using the Pi-hole for a few months and see how it goes. Do be aware that some things WILL break. It could be a kid's free-to-play iPhone game that won't work unless it can download an ad, or it could be your company's VPN. You'll need to log into http://pi.hole/admin (make sure you save your password when you first install; you can only change it at the SSH command line with "pihole -a -p") and sometimes disable blocking for a few minutes to test, then whitelist certain domains. I suspect after a few weeks I'll have it nicely dialed in.


        Sponsor: Seq delivers the diagnostics, dashboarding, and alerting capabilities needed by modern development teams - all on your infrastructure. Download at https://datalust.co/seq.

