On March 14th starting at 8 AM PT, we’ll be hosting a free Windows Developer Twitch Workshop for .NET developers working with the WPF, WinForms, or UWP frameworks.
The day will be split into three themes across seven sessions:
Productivity: Productivity for existing developers focused on Visual Studio XAML tooling and design pattern framework libraries
Desktop Apps of Tomorrow: Looking forward to how you can get started exploring the latest technology for your desktop application such as .NET Core 3, XAML Islands, MSIX and Windows 10 APIs
Extending Your Skills: Extending your skills with Forms for mobile apps and DevOps for desktop applications CI/CD
For the full schedule of the day's sessions, see this detailed post on the Visual Studio blog.
We hope you tune in on March 14th. If you miss it, don't worry; the content will be posted on the Visual Studio YouTube channel in about a week.
Which US city has the worst weather? To answer that question, data analyst Taras Kaduk counted the number of pleasant days in each city and ranked them accordingly. For this analysis, a "pleasant" day is one where the average temperature was in the range 55°F-75°F, the maximum was in the range 60°-90°, the minimum was in the range 40°-70°, and there was no significant rain or snow. (The weather data was provided by the NOAA Global Surface Summary of the Day dataset and downloaded to R with the rnoaa package.)
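To make those criteria concrete, here is a minimal sketch of the "pleasant day" predicate in C# (the original analysis was done in R; the type, field names, and the zero-precipitation threshold below are illustrative assumptions, not Kaduk's code):

public record DailyWeather(double AvgTempF, double MaxTempF, double MinTempF,
                           double RainInches, double SnowInches);

// A day counts as "pleasant" when every reading falls inside its band and there is
// no significant rain or snow (approximated here as none at all).
static bool IsPleasant(DailyWeather d) =>
    d.AvgTempF >= 55 && d.AvgTempF <= 75 &&
    d.MaxTempF >= 60 && d.MaxTempF <= 90 &&
    d.MinTempF >= 40 && d.MinTempF <= 70 &&
    d.RainInches == 0 && d.SnowInches == 0;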
With those criteria and focusing just on the metro regions with more than 1 million people, the cities with the fewest pleasant days are:
San Juan / Carolina / Caguas, Puerto Rico (hot year-round)
Rochester, NY (cold in the winter, rain in the summer)
Detroit / Warren / Dearborn, MI (cold in the winter, rain in the summer)
You can see the top (bottom?) 25 cities in this list in the elegant chart below (also by Taras Kaduk), which shows each city as a polar bar chart, with one ring for each of the six years of data analyzed.
And if you're wondering which cities have the best weather, here's the corresponding chart for the 25 cities with the most pleasant days. San Diego / Carlsbad (CA) tops that list.
You can find the R code behind the analysis and charts in this Github repository. (The polar charts above required a surprisingly small amount of code: it's a simple transformation of a regular bar chart with the ggplot2 coord_polar transformation — quite appropriate given the annual cycle of weather data.) And for the full description of the analysis including some other nice graphical representations of the data, check out the blog post linked below.
It’s been a busy week here on the Azure DevOps team – we’ve been putting the finishing touches on Azure DevOps Server 2019 and getting it out the door. Azure DevOps Server is the new name for Team Foundation Server.
Extension authors use Visual Studio version ranges to specify which versions of Visual Studio their extensions support. A version range looks like this: [14.0, 17.0). It specifies the minimum and maximum version of Visual Studio, as well as whether each endpoint is included or excluded.
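To make the notation concrete, here's a small illustrative check in C# (a sketch, not the actual logic Visual Studio uses to evaluate extension manifests); a square bracket includes the endpoint and a parenthesis excludes it, so 16.0 (Visual Studio 2019) satisfies [14.0, 17.0) while 17.0 does not:

using System;

static bool InRange(Version candidate, string range)
{
    // Example range: "[14.0, 17.0)" where '[' or ']' includes the endpoint, '(' or ')' excludes it
    bool minInclusive = range.StartsWith("[");
    bool maxInclusive = range.EndsWith("]");
    string[] parts = range.Trim('[', '(', ')', ']').Split(',');
    Version min = Version.Parse(parts[0].Trim());
    Version max = Version.Parse(parts[1].Trim());
    return (minInclusive ? candidate >= min : candidate > min)
        && (maxInclusive ? candidate <= max : candidate < max);
}

Console.WriteLine(InRange(new Version(16, 0), "[14.0, 17.0)")); // True
Console.WriteLine(InRange(new Version(17, 0), "[14.0, 17.0)")); // False, upper bound excluded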
This blog post was co-authored by David Armour, Principal PM Manager, Azure Stack, and Tiberiu Radu, Senior Program Manager, Azure Stack.
Foundation of Azure Stack IaaS
Remember back in the virtualization days when you had to pick a host for your virtual machine? Some of my business units could tell the make and manufacturer of the hardware from the naming convention. Using this knowledge, they’d fill up the better gear first, leaving the teams that didn’t know better with the oldest hosts.
Clouds take a different approach. Instead of hosts, VMs are placed into a pool of capacity. The physical infrastructure is abstract. The compute, storage, and networking resources consumed by the VM are defined through software.
Azure Stack is an instance of the Azure cloud that you can run in your own datacenter. Microsoft has taken the experience and technology from running one of the largest clouds in the world to design a solution you can host in your facility. This forms the foundation of Azure Stack’s infrastructure-as-a-service (IaaS).
Let’s explore some of the characteristics of the Azure Stack infrastructure that allow you to run cloud-native VMs directly in your facility.
Cloud inspired hardware
Microsoft employees can’t just purchase their favorite server and rack it into an Azure datacenter. The only servers that enter an Azure datacenter have been specifically built for Azure. Not only are the servers built for Azure, so are the networking devices, the racks, and the cabling. This extreme standardization allows the Azure team to operate an Azure datacenter with just a handful of employees. Because all the servers are standardized and can be uniformly operated and automated, adding additional capacity to a datacenter doesn’t require hiring more employees to operate them.
Another advantage of standardizing hardware configurations is that standardization leads to expected, repeatable results – not only for Microsoft and Azure, but for its customers. The hardware integration has been validated and is a known recipe. Servers, storage, networking, cabling layout, and more are all well-known, and based on these recipes the ordering, delivery, and integration of new hardware components, as well as servicing and eventual retirement, are repeatable and scalable. The full end-to-end validation of these configurations is done once, with quick checks in place when the capacity is delivered and installed.
These principles are applied to Azure Stack solutions as well. The configurations, their capabilities, and their validation are all well-known, and the result is a repeatable and supportable product. Microsoft, its partners, and most importantly the end customer benefit. While an Azure Stack customer is limited to the defined partner solutions, these have been built with reasonable flexibility so the customer can choose the specific capabilities or capacities required. Please note, there is one exception – the Azure Stack Development Kit (ASDK) allows you to install Azure Stack on any hardware that meets the hardware requirements. The ASDK is for evaluation purposes and is not supported as a production environment.
Microsoft has partnered and co-engineered solutions with a variety of hardware partners or OEMs. The benefit is that Azure Stack can meet you where your existing relationships exist. These relationships may be based on existing hardware purchasing agreements, geographic location, or support capabilities. Keeping in mind the principles of operating a solution in a well-defined manner, Microsoft has set minimum requirements for Azure Stack hardware solutions. Each of our partners can then choose from their portfolio the components, servers, and network switches that best meet your needs. This creates a well-defined variety that continues to be supportable and delivers the overall solution value.
One of the principles we have taken from Microsoft’s experience in the enterprise and from Azure is overall solution resilience. The world of software and hardware is not perfect; things fail – cables go bad, software has bugs, power outages occur, and on and on. While we work to build better software and with our solution partners to continually improve, we must expect that things fail. Azure Stack solutions are not perfect, but have been constructed with the intent to overcome the common points of failure. For example, each copy of tenant/user data is stored on three separate storage devices in three separate servers. The physical network paths are redundant and provide better performance and resiliency to potential failure. The internal software of Azure Stack are services that coordinate across multiple instances. This type of end-to-end architectural design and implementation leads to a better end experience. Combining this approach to infrastructure resilience with the well-known and validated solutions approach described above provides for a better experience for the customer.
When you run your IaaS VMs in Azure Stack, you should know they are running on a secure foundation. It turns out that one of the reasons people select Azure Stack is because they have data and/or processes that are either regulated or defined in a contractual agreement. Azure Stack not only gives its owners control of their data and processes, it comes with an infrastructure which is secured by default. In fact, the underlying infrastructure is locked down in a way that neither the owner nor Microsoft can access it. If it ever needs to be accessed because of a support issue, both the owner and Microsoft must combine their keys to obtain access to the system, and only for a limited time.
Azure leads the industry in security compliance, and security compliance is important for Azure Stack as well. In Azure, Microsoft fully manages the technology, people, and processes as well as its compliance responsibilities. Things are different with Azure Stack. While the technology is provided by Microsoft, the people and processes are managed by the operator. To help operators jump-start the certification process, Azure Stack has gone through a set of formal assessments by an independent third-party auditing firm to document how the Azure Stack infrastructure meets the applicable controls from several major compliance standards. Because the standards include several personnel-related and process-related controls, the documentation is an assessment of the technology, not a certification of Azure Stack, but it helps you get started. The technology assessments include the following standards:
CSA Cloud Control Matrix – A comprehensive mapping across multiple standards, including FedRAMP Moderate, ISO27001, HIPAA, HITRUST, ITAR, NIST SP800-53, and others
As noted earlier, Azure Stack is sold as an integrated hardware system, with software pre-installed on the validated hardware. It typically comes in a standard server rack. You choose where your system will be located. You can host it in your data center or perhaps you want to run it in a service provider’s facility.
With the Azure Stack running in your location of choice, you also have a choice of who operates the Azure Stack infrastructure. An Azure Stack operator is responsible for giving access to the Azure Stack, keeping the software and firmware up-to-date, providing the content in the marketplace, monitoring the system health, and diagnosing issues. Azure Stack provides automation, documentation, and training for all of these processes so that someone from your organization can operate Azure Stack. We also provide trained partner experts who can operate your Azure Stack either in their facility or yours.
Here is an overview of your options when you acquire your Azure Stack:
Once you have your Azure Stack up and running and you begin to plan your first IaaS VM deployments, you need to think about these VMs as cloud deployments, not virtualization deployments. IaaS VMs run best when they take advantage of the cloud infrastructure that they are running on. Many times, the way you tune a VM in your cloud infrastructure will be very different than the way you tuned VMs in your traditional virtualization environment. That said, you can always start with what you already have and improve those solutions through modern operations.
A great example of this is the use of multiple disks to get the IOPS and throughput required by the application. As is the case in Azure, virtual machines placed in Azure Stack have limits applied to their disk activity. This limits the impact of one VM’s activity on another VM – the "noisy neighbor" problem. While these limits are great for IaaS environments, it may take extra work to deploy workloads so they get the resources they need – in this example, IOPS.
For optimization of SQL Server deployments, our documentation provides guidance on how to configure storage to obtain the needed performance. In this case, the approach is to attach multiple disks and stripe them to obtain the capacity and performance. Using managed disks for your VMs allows the system to optimize where the physical data gets stored within your Azure Stack. Moving from virtualization environments to IaaS is reasonably straightforward and has its benefits, but it requires a little bit of work on your first deployment. You can always use tools like Azure Monitor and the Virtual Machine solutions to better understand your workloads and gain insights into the performance of your VMs. When your VMs are not meeting the performance requirements, you can also use the Azure Performance Diagnostics VM Extension for Windows to troubleshoot and identify potential bottlenecks.
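For example (illustrative numbers only, not Azure Stack's published limits): if the VM size you've chosen caps each data disk at roughly 500 IOPS and your SQL Server workload needs about 2,000 IOPS, you would attach at least 2,000 / 500 = 4 data disks and stripe them into a single volume inside the guest (for instance with Storage Spaces), so the workload sees the combined throughput.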
The great thing about IaaS, and specifically Azure Stack, is the ability to easily reuse the deployment templates or artifacts to reduce the work for migration of similar workloads. We will cover this more in a future blog post.
Infrastructure purpose built for running cloud-native VMs
Few organizations can claim that they have experience building one of the largest cloud infrastructures in the world. When you buy an Azure Stack, you get the benefit of Microsoft’s Azure experience. Microsoft has partnered with the best OEMs to deliver a standardized configuration so that you don’t have to worry about these details. The infrastructure of Azure Stack is purpose-built to get the best for your IaaS VMs – keeping them safe, secure, and performant.
So you've been asked to parse some dates, except the years are two digit years. For example, dates like "12 Jun 30" are ambiguous...or are they?
If "12 Jun 30" is intended to express a birthday, given that it's 2019 as of the writing of this post, we can assume it means 1930. But if the input is "12 Jun 18", is that last year, or is that a 101-year-old person's birthday?
This behavior is governed by the calendar's TwoDigitYearMax property. For example, if this property is set to 2029, the 100-year range is from 1930 to 2029. Therefore, a 2-digit value of 30 is interpreted as 1930, while a 2-digit value of 29 is interpreted as 2029.
The initial value for this property comes out of the DEPTHS of the region and languages portion of the Control Panel. Note way down there in "additional date, time, & regional settings" in the "more settings" and "date" tab, there's a setting that (currently) splits on 1950 and 2049.
If you're writing a server-side app that parses two digit dates you'll want to be conscious and explicit about what behavior you WANT so that you're not surprised.
Setting TwoDigitYearMax sets a 100 year RANGE that your two digit years will be interpreted to be within. You can also just change it on the current thread's current culture's calendar. It's up to you.
For example, this little app:
using System;
using System.Globalization;

string dateString = "12 Jun 30"; //from user input
DateTime result;
CultureInfo culture = new CultureInfo("en-US");

//With the culture's default two-digit-year window, "30" is read as 1930
DateTime.TryParse(dateString, culture, DateTimeStyles.None, out result);
Console.WriteLine(result.ToLongDateString());

//Raise TwoDigitYearMax so the window becomes 2000-2099 and "30" is read as 2030
culture.Calendar.TwoDigitYearMax = 2099;
DateTime.TryParse(dateString, culture, DateTimeStyles.None, out result);
Console.WriteLine(result.ToLongDateString());
gives this output:
Thursday, June 12, 1930
Wednesday, June 12, 2030
Note that I've changed TwoDigitYearMax from its default and moved the window up to the 2000-2099 range, so "30" is assumed to be 2030, within that 100-year range.
Hope this helps!
Sponsor: Stop wasting time trying to track down the cause of bugs. Sentry.io provides full stack error tracking that lets you monitor and fix problems in real time. If you can program it, we can make it far easier to fix any errors you encounter with it.
ML.NET is an open-source and cross-platform machine learning framework (Windows, Linux, macOS) for .NET developers. Using ML.NET, developers can leverage their existing tools and skillsets to develop and infuse custom AI into their applications by creating custom machine learning models for common scenarios like Sentiment Analysis, Recommendation, Image Classification, and more!
Today we’re announcing the release of ML.NET 0.11. (ML.NET 0.1 was released at //Build 2018). This release, and all other remaining releases before the v1.0 release, will focus on the overall stability of the framework, continuing to refine the API, fix bugs, reduce the public API surface, and improve documentation and samples.
Updates in v0.11 timeframe
Added additional ML components to the MLContext catalog, so it’s easier to find the classes and operations to use. Below you can see the experience based on IntelliSense.
Support for text input in TensorFlowTransformer so you can use TensorFlow models for text analysis (in addition to images). For instance, the following code shows ML.NET scoring a TensorFlow model for a ‘sentiment analysis’ scenario:
public class TensorFlowSentiment
{
public string Sentiment_Text;
[VectorType(600)]
public int[] Features;
[VectorType(2)]
public float[] Prediction;
}
[TensorFlowFact]
public void TensorFlowSentimentClassificationTest()
{
var mlContext = new MLContext(seed: 1, conc: 1);
var data = new[] { new TensorFlowSentiment() { Sentiment_Text = "this film was just brilliant casting location scenery story direction everyone's really suited the part they played and you could just imagine being there robert is an amazing actor and now the same being director father came from the same scottish island as myself so i loved the fact there was a real connection with this film the witty remarks throughout the film were great it was just brilliant so much that i bought the film as soon as it was released for and would recommend it to everyone to watch and the fly fishing was amazing really cried at the end it was so sad and you know what they say if you cry at a film it must have been good and this definitely was also to the two little boy's that played the of norman and paul they were just brilliant children are often left out of the list i think because the stars that play them all grown up are such a big profile for the whole film but these children are amazing and should be praised for what they have done don't you think the whole story was so lovely because it was true and was someone's life after all that was shared with us all" } };
var dataView = mlContext.Data.ReadFromEnumerable(data);
var lookupMap = mlContext.Data.ReadFromTextFile(@"sentiment_model/imdb_word_index.csv",
columns: new[]
{
new TextLoader.Column("Words", DataKind.TX, 0),
new TextLoader.Column("Ids", DataKind.I4, 1),
},
separatorChar: ','
);
var estimator = mlContext.Transforms.Text.TokenizeWords("TokenizedWords", "Sentiment_Text")
.Append(mlContext.Transforms.Conversion.ValueMap(lookupMap, "Words", "Ids", new[] { ("Features", "TokenizedWords") }));
var dataPipe = estimator.Fit(dataView)
.CreatePredictionEngine<TensorFlowSentiment, TensorFlowSentiment>(mlContext);
string modelLocation = @"sentiment_model";
var tfEnginePipe = mlContext.Transforms.ScoreTensorFlowModel(modelLocation, new[] { "Prediction/Softmax" }, new[] { "Features" })
.Append(mlContext.Transforms.CopyColumns(("Prediction", "Prediction/Softmax")))
.Fit(dataView)
.CreatePredictionEngine<TensorFlowSentiment, TensorFlowSentiment>(mlContext);
//Predict the sentiment for the sample data
var processedData = dataPipe.Predict(data[0]);
Array.Resize(ref processedData.Features, 600);
var prediction = tfEnginePipe.Predict(processedData);
}
You can see additional details in this code example.
ONNX updates: ONNX is an open and interoperable model format that enables models trained in one framework (e.g., scikit-learn, TensorFlow, xgboost) to be used in another framework (like ML.NET).
In ML.NET 0.11, Microsoft.ML.ONNX has been renamed to Microsoft.ML.ONNXConverter, and Microsoft.ML.ONNXTransform has been renamed to Microsoft.ML.ONNXTransformer to make the distinction between ONNX conversion and transformation clearer.
Breaking changes in ML.NET 0.11
For your convenience, if you are moving your code from ML.NET v0.10 to v0.11, you can check out the breaking changes list that impacted our samples.
Explore the community samples and share yours!
As part of the ML.NET Samples repo, we also have a special Community Samples page pointing to multiple samples provided by the community. These samples are not maintained by Microsoft, but they are very interesting and cover additional scenarios not covered by us.
Important ML.NET concepts for understanding the new API are introduced here
“How to” guides that show how to use these APIs for a variety of scenarios can be found here
We appreciate your feedback. File issues with any suggestions or enhancements in the ML.NET GitHub repo to help us shape ML.NET and make .NET a great platform of choice for Machine Learning.
The future of food security and feeding an expanding global population depends upon our ability to increase food production globally—an estimated 70 percent by the year 2050, according to the Food and Agriculture Organization of the United Nations. But challenges including climate change, soil quality, pest control, and shrinking land availability, not to mention water resource constraints, must be addressed.
So how can we increase yields in a sustainable, intelligent way?
We believe that Internet of Things (IoT) technology and data-driven agriculture are one answer. In fact, IoT is already showing promising results.
Find out how IoT solves some of agriculture’s most vexing challenges by helping farmers connect fields and herds, reduce risks, streamline operations, and increase yield. To learn more, register for the IoT in Action event in Sydney on March 19, 2019.
How IoT is redefining agriculture
IoT offers countless benefits to agriculture across a wide range of scenarios. Microsoft Project FarmBeats is a cost-effective artificial intelligence (AI) and IoT platform that is based on Windows IoT devices and Azure cloud technologies. By combining low-cost sensors, drones, and vision and machine learning algorithms to map farms, Microsoft Project FarmBeats enables data-driven precision agriculture and the ability to increase density, quality, sustainability, and yield.
IoT-enabled sensors in the field can monitor everything from soil pH and quality to water saturation to ensure site-specific applications of irrigation, pesticides, and fertilizers. IoT provides opportunities for phenotyping and targeting seed varieties where they’ll best thrive. Drones and robots help monitor crops, identify optimal harvest times, and mitigate threats from pests and disease in real-time.
For operations that raise livestock or produce other animal products, connected field sensors and animal tags can be used to track and manage herds, monitor animal health and fertility, alert farmers to predators, and manage feed.
Of course, connecting devices and uploading data to the cloud can be especially challenging in rural areas. Microsoft has found a way to overlay WiFi signals over TV whitespaces—that is, unused TV channels—to transport data from sensors, drones, cameras, and tractors back to the farmer’s office. From there Azure IoT Edge running on PCs handles most of the computing, including Project FarmBeats AI and Computer Vision algorithms, and transmits data to the cloud, regardless of broadband speed.
Real-life applications of IoT in agriculture
One of Australia’s fastest growing dairy companies, Australian Consolidated Milk (ACM) serves more than 180 farms and handles around 350 million liters of milk annually. Ensuring the quality and safety of milk is a top priority, and maintaining the right temperature from collection to transport is key. One spoiled tanker-load of milk can cost up to $10,000 and have negative environmental impacts.
To help mitigate this, ACM is working with Advance Computing to trial a cloud-based IoT solution to provide greater visibility into milk temperature so that actions can be taken as soon as an anomaly is detected. The solution sends quality and temperature notifications to farmers in real-time so they can make necessary changes without delay.
Water is also a major concern in agriculture, consuming approximately 70 percent of our global water resources, according to the Food and Agriculture Organization. New Zealand-based Blackhills Farm is doing its part to lower that percentage.
Using the SCADAFarm system by WaterForce, which combines IoT solutions from Schneider Electric and Microsoft, Blackhills Farm is able to remotely monitor and control their irrigation system. Sprinklers can be customized for individual crops, soils, and moisture levels and be adjusted quickly for rain, heat, and other conditions. The solution has helped Blackhills Farm reduce water and power usage while realizing higher crop yields.
Meanwhile, during harvest season, Echuca-based Kagome receives some 180 tons of tomatoes at its plant each hour. It enlisted the help of Advance Computing to devise an IoT-based solution that uses data from on-farm sensors, in-truck devices, and technology installed in Kagome’s loading bay to ensure the company has a clear window on its operations. Tracing shipments is now automated and information can be accessed anytime and anywhere. According to Kagome CEO Jason Fritsch, the solution has paid for itself five times over in the first season.
See how IoT is reshaping agriculture at IoT in Action in Sydney
IoT in Action is coming to Sydney on March 19, 2019. Register for this one-day, in-person event to discover how partners and customers are unlocking the potential of intelligent edge and intelligent cloud solutions to transform success in agriculture and other industries. Gain actionable insights around the latest topics in IoT business transformation, innovations in IoT security, the intelligent edge, and more. Plus, meet face-to-face with IoT experts, partners, and technical and business decision makers.
Announcing the public preview of Azure Premium Blob Storage. Premium Blob Storage is a new performance tier in Azure Blob Storage, complementing the existing Hot, Cool, and Archive tiers. Premium Blob Storage is ideal for workloads with high transaction rates or that require very fast access times, such as IoT, telemetry, AI, and scenarios with humans in the loop such as interactive video editing, web content, online transactions, and more.
Service Fabric Processor is a new library for consuming events from an Event Hub that is directly integrated with Service Fabric. It uses Service Fabric's facilities for managing partitions and reliable storage, and for more sophisticated load balancing. Service Fabric Processor is currently in preview and available on NuGet. The source code and a sample application are available on GitHub. See post for links.
Auto-tiering and auto-expiration functionality for our “Azure Blob Storage on IoT Edge” module is now available in public preview. Azure Blob Storage on IoT Edge is a lightweight, Azure-consistent module which provides local block blob storage, allowing you to store and access data efficiently, process it if required, and then automatically upload it to Azure. These new features are available for Linux AMD64 and Linux ARM32.
Secure server access with VNet service endpoints is now generally available for Azure Database for MariaDB. VNet service endpoints enable you to isolate connectivity to your logical server from a given subnet within your virtual network. There is no additional billing for virtual network access through VNet service endpoints; the current pricing model for Azure Database for MariaDB applies as is.
Read replicas are now generally available to all Azure Database for MySQL users. For read-heavy workloads that you are looking to scale out, you can now use read replicas, which make it easy to scale horizontally beyond a single database server by supporting continuous asynchronous replication of data from one Azure Database for MySQL server to up to five Azure Database for MySQL servers in the same region.
Announcing the official release of Azure DevOps Server 2019, previously known as Team Foundation Server. Azure DevOps Server 2019 brings the power of Azure DevOps into your dedicated environment, and you can install it into any datacenter or sovereign cloud. Azure DevOps includes developer collaboration tools which can be used together or independently, including Azure Boards (Work), Azure Repos (Code), Azure Pipelines (Build and Release), Azure Test Plans (Test), and Azure Artifacts (Packages). These tools support all popular programming languages, any platform (including macOS, Linux, and Windows) or cloud, as well as on-premises environments. Azure DevOps Server 2019 is now generally available.
Announcing the general availability of Microsoft Azure from our new cloud regions in Cape Town and Johannesburg, South Africa. The launch of these regions marks a major milestone for Microsoft as we open our first enterprise-grade datacenters in Africa, becoming the first global provider to deliver cloud services from datacenters on the continent. The new regions provide the latest example of our ongoing investment to help enable digital transformation and advance technologies such as AI, cloud, and edge computing across Africa.
Announcing the general availability of SignalR Service bindings in Azure Functions. SignalR Service is a fully managed Azure service that simplifies the process of adding real-time web functionality to applications over HTTP. This real-time functionality allows the service to push messages and content updates to connected clients using technologies such as WebSocket. As a result, clients are updated without the need to poll the server or submit new HTTP requests for updates. SignalR Service bindings in Azure Functions is available in all global regions where Azure SignalR Service is available.
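As a rough sketch of what the C# bindings look like (assuming the Microsoft.Azure.WebJobs.Extensions.SignalRService package; the hub name "notifications" and the "newMessage" target are placeholders, not names from the announcement):

using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.SignalRService;

public static class NotificationsFunctions
{
    // Clients call this endpoint to obtain the SignalR Service URL and an access token.
    [FunctionName("negotiate")]
    public static SignalRConnectionInfo Negotiate(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
        [SignalRConnectionInfo(HubName = "notifications")] SignalRConnectionInfo connectionInfo)
        => connectionInfo;

    // Any function can push a message to all connected clients through the output binding.
    [FunctionName("broadcast")]
    public static Task Broadcast(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] string message,
        [SignalR(HubName = "notifications")] IAsyncCollector<SignalRMessage> signalRMessages)
        => signalRMessages.AddAsync(new SignalRMessage
        {
            Target = "newMessage",
            Arguments = new object[] { message }
        });
}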
Announcing the general availability of Azure Databricks now with support for Data Engineering Light and Azure Machine Learning. Azure Databricks provides a fast, easy, and collaborative Apache Spark™-based analytics platform to accelerate and simplify the process of building big data and AI solutions backed by industry leading SLAs. Customers can now get started with Azure Databricks and a new low-priced workload called Data Engineering Light that enables customers to run batch applications on managed Apache Spark with the added benefit of having an optimized, autoscaling, collaborative workspace, automated machine learning, and end-to-end Machine Learning Lifecycle management.
Announcing the general availability of Azure Data Lake Gen 2 and Azure Data Explorer. With the latest release of Azure SQL Data Warehouse, Microsoft doubles down on Azure SQL DW as one of the core data services for digital transformation on Azure. In addition to the fundamental benefits of agility, on-demand scaling, and unlimited compute availability, the most recent price-to-performance metrics from the GigaOM report are one of several compelling arguments for customers to adopt Azure SQL DW together with Power BI for rich visualization. This enhanced set of capabilities cements Microsoft’s leadership position in cloud-scale analytics.
Announcing the launch of two new key capabilities to Azure Firewall: threat intelligence-based filtering and service tags filtering. Azure firewall can now be configured to alert and deny traffic to and from known malicious IP addresses and domains in near real-time as well as to use service tags in the network rules destination field. Azure Firewall is a cloud native firewall-as-a-service offering which enables customers to centrally govern all their traffic flows using a DevOps approach.
Announcing new Azure Security Center capabilities in Azure and Microsoft 365 that strengthen unified security management and advanced threat protection for hybrid cloud workloads. Azure Security Center now leverages machine learning to reduce the attack surface of internet-facing virtual machines. Its adaptive application controls have been extended to Linux and on-premises servers, and network map support has been extended to peered virtual network (VNet) configurations. If you have Azure Security Center in your Azure subscription, you can take advantage of these new capabilities for all your internet-exposed Azure resources immediately.
To help organizations deploying IoT solutions to address security concerns, Microsoft co-authored and edited the Industrial Internet Consortium (IIC) IoT Security Maturity Model (SMM) Practitioner’s Guide. The SMM leads organizations as they assess the security maturity state of their current organization or system, and as they set the target level of security maturity required for their IoT deployment. Once organizations set their target maturity, the SMM gives them an actionable roadmap that guides them from lower levels of security maturity to the state required for their deployment.
Announcing the release of Bot Framework SDK version 4.3 with updates for the Conversational AI releases that let you connect with your users wherever your users are. This release includes new channel support for popular messaging apps, a simplified approach for activity message handling, web API integration for .NET developers, Web Chat support that lets developer add a messaging interface for their bot on websites or mobile apps, and more.
As the value of connectedness increases, enterprises need a mechanism to securely connect these devices that are already in service. But how do businesses leverage IoT for the billions of devices already in the field without creating a large security risk? Azure Sphere enables secure, connected, microcontroller- (MCU-) based devices by establishing a foundation on which an enterprise can trust a device to run securely in any environment. With an Azure Sphere-enabled device, enterprise customers can more confidently connect existing devices to the cloud and unlock scenarios related to preventive maintenance, optimizing utilization, and even role-based access control.
Data integration is complex, with many moving parts. It helps organizations combine data and complex business processes in hybrid data environments. Failures are very common in data integration workflows, and recovering from them requires rerunning failed activities. Azure Data Factory now allows you to rerun activities inside your pipelines. Get started building pipelines easily and quickly using Azure Data Factory.
Jasmine Greenaway shares how to customize the Azure Portal, creating tiles and multiple dashboard views. From adding clocks and gifs to greet you at log-in to creating, programmatically updating, and publishing dashboards for multiple purposes (demos, projects, sandbox/evaluation).
Working with Azure Resource Manager Templates provides you with a way to codify your infrastructure using JSON. In this tutorial, Jay Gordon shows how to get started with using different parameters alongside your template, and deploy to Azure -- all from the Azure CLI (command-line) tool.
In this step-by-step tutorial, Adi Polak shows you how to use the new Spark TensorFrame library - running on Azure Databricks - to start working with TensorFlow on top of Apache Spark (and why you'd want to).
Frank Boucher shows us how to use Azure Functions V2, VS Code, and the VS Code Azure Functions extension to automatically unzip files in Azure Blob Storage. Get a quick introduction to Azure Functions and a few of Frank's favorite VS Code Azure Functions extension tips and tricks.
In this blog post, David references architecture for real-time scoring with R, published in Microsoft Docs, and describes a Kubernetes-based system to distribute the load to R sessions running in containers.
The third in a series of posts on Azure Stack, this installment focuses on Fundamentals of IaaS. Azure Stack is an instance of the Azure cloud that you can run in your own datacenter. Microsoft has taken the experience and technology from running one of the largest clouds in the world to design a solution you can host in your facility. This forms the foundation of Azure Stack’s infrastructure-as-a-service (IaaS).
In the digital economy, customers are struggling to build and keep the talent pool needed to develop digital assets, meet their business demands, and stay competitive in the marketplace by improving employees’ skillsets. With Azure Lab Services, customers can quickly set up classroom labs for their employees to gain practical experience not only with the latest technologies, but also with their internal and external business applications.
With the strategic value of APIs, a continuous integration (CI) and continuous deployment (CD) pipeline has become an important aspect of API development allowing organizations to automate deployment of API changes without error-prone manual steps; as well as to detect issues earlier and deliver value to end users faster. Walk through a conceptual framework for implementing a CI/CD pipeline for deploying changes to APIs published with Azure API Management.
Companies are faced with the trade-offs for having an on-premises security solution or the convenience of moving data to the cloud. SQL Server and Azure SQL Database now provide the most consistent hybrid data platform with frictionless migration across on-premises, cloud, and private cloud; all at a lower cost. Review three reasons why Microsoft should be your hybrid data platform of choice.
Get insights about e-signature requirements with Team Foundation Server, Azure DevOps Services, and Azure DevOps Server in this post that describes how to satisfy the Code of Federal Regulations, Title 21, PART 11 ELECTRONIC RECORDS; ELECTRONIC SIGNATURES requirements.
Azure SQL Managed Instance has predefined storage space that depends on the values of reserved storage and vCores that you choose when you provision the instance. See how to check remote storage usage, create alerts using SQL Agent, and monitor storage space on the Managed Instance.
Did you know you can integrate with SAP from PowerApps and Flow using Azure Logic Apps? Read this post to see how to connect PowerApps & Flow with SAP in an end-to-end working example.
The Azure Podcast commemorates International Women's Day 2019, as the team talks to Chloe Condon, a Senior Cloud Developer Advocate at Microsoft, about her Azure learning journey and her experience as a woman in cloud computing.
On this episode of the Azure Podcast, Paresh Mundade, a Senior PM in the Azure ExpressRoute team, presents an update on the service and a glimpse into the roadmap of planned features.
In this episode of Five Things About Azure Functions, John Papa and Jeff Hollan bring you five reasons you should check out Azure Functions today. You can also listen to Jeff dive deeper into serverless on his recent episode of Real Talk JavaScript.
Gen Studio is a prototype concept which was created over a two-day hackathon with collaborators across The Metropolitan Museum of Art (The Met), Microsoft, and Massachusetts Institute of Technology (MIT). Gen Studio uses Microsoft AI to allow you to visually and creatively navigate the shared features and dimensions underlying The Met’s Open Access collection. Within the Gen Studio is a tapestry of experiences based on generative adversarial networks (GANs) which allow you to explore, search, and even be immersed within the latent space underlying The Met’s encyclopedic collection.
Clean Water AI is a device that uses a deep learning neural network to detect dangerous bacteria and harmful particles in water. Users can see drinking water at a microscopic level, just like they would view footage from a security camera, with real-time detection and contamination mapping.
This session provides an overview of some of the more popular privacy features employed by private consortiums to enable sharing data only with specific participants in a network. This is implemented in a variety of ways and the architecture of these are discussed with a brief demo using the Quorum blockchain.
One thing you really have to consider when bringing Artificial Intelligence to the edge is the hardware you will need to run these powerful algorithms. Ted Way from the Azure Machine Learning team joins Olivier on the IoT Show to discuss hardware acceleration at the Edge for AI.
Come learn how to use Azure Maps to provide location intelligence in different areas of transportation such as fleet management, asset tracking, and logistics.
Brady Gaster joins Cecil Phillip to show how easy it is to add real-time functionality to your web applications using ASP.NET Core SignalR. They discuss topics such as targeting specific clients, SignalR transports, and options for running your SignalR application in the cloud. The Hub protocol spec is also available on GitHub if you're interested in creating your own SignalR client.
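For reference, a minimal ASP.NET Core SignalR hub looks something like this (hub, method, and group names here are illustrative, not from the episode); the Clients property is where client targeting comes in, with Clients.All, Clients.Caller, Clients.Group, and so on:

using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class ChatHub : Hub
{
    // Broadcast to every connected client.
    public Task SendToAll(string user, string message) =>
        Clients.All.SendAsync("receiveMessage", user, message);

    // Reply only to the connection that invoked the hub method.
    public Task SendToCaller(string message) =>
        Clients.Caller.SendAsync("receiveMessage", "server", message);

    // Target a named group of connections.
    public Task SendToGroup(string group, string message) =>
        Clients.Group(group).SendAsync("receiveMessage", "server", message);
}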
Learn what Azure DevOps projects are and how to use them with Node.js and Azure Kubernetes Service. In part 1, you’ll learn how Azure DevOps projects makes it easy for you to create and build deployments.
Learn how to easily manage virtual machine connectivity through the Azure Portal. You’ll learn how to manage virtual machine network security groups for virtual network subnets and virtual machines.
In this episode, Jeffrey Palermo and Greg Leonardo continue their conversation on deploying Azure — this time going deeper as they discuss some of the topics from Greg's book, Hands-On Cloud Solutions with Azure: Architecting, developing, and deploying the Azure way; infrastructure as code; provisioning environments; how to watch your environments; and much more on what developers targeting Azure need to know.
The Azure Communications team is hosting a special "Ask Me Anything" (AMA) session on Reddit and Twitter. Look for the Reddit session Monday, March 11th, from 10:00 AM to noon PST. Participate by posting to the /r/Azure subreddit when the AMA is live. Look for the Twitter session on Wednesday, March 13th, from 10:00 AM to noon PST. Be sure to follow @AzureSupport before March 13th and tweet us during the event using the hashtag #AzureCommsAMA.
Announcing a Microsoft and Intel partnership to bring optimized deep learning frameworks to Azure. Over the last few years, deep learning has become the state of the art for several machine learning and cognitive applications. Innovations in deep neural networks in these domains have enabled these algorithms to reach human level performance in vision, speech recognition, and machine translation. The Intel Optimized Data Science VM is an ideal environment to develop and train deep learning models on Intel Xeon processor-based VM instances on Azure. These optimizations are available in a new offering on the Azure marketplace called the Intel Optimized Data Science VM for Linux (Ubuntu).
Our partners are delivering more innovation in AI by expanding their business through co-selling opportunities and leveraging distribution options through our commercial marketplaces such as Azure Marketplace and AppSource. We are now rolling out an initial set of platform changes to open new opportunities for our partners to go to market with Microsoft. Get a sneak peek on our public marketplace roadmap.
Lars gives us the latest Azure news from his farm in rural Australia! He discusses a new intelligent security tool called Microsoft Azure Sentinel, Azure Monitor AIOps Alerts with Dynamic Thresholds, and Java support for Azure Functions which is now in general availability.
IT organizations are under more pressure than ever to do more with less; they are expected to drive competitive advantage and innovation with higher quality while managing smaller teams. This shift in the enterprise cost-to-value equation has created a transformative inflection point across every business domain, underpinned by new enabling technologies and development paradigms. Organizations must now adapt by adopting rapid and strategic transformation while simultaneously working diligently to keep the lights on, all with the important goal of reducing costs. When done right, three clear benefits appear:
Going cloud native means more than simply offloading datacenter costs and complexity. It means a loosely coupled software architecture that allows features and bug fixes to be shipped whenever and wherever they need to be by smaller development, QA, release, and production support teams.
DevOps is more than a facelift on release management. New pipeline tools coupled with new design and development patterns spur a cultural shift occurring inside IT organizations. DevOps is a revolution in how software is created and supported.
Modernizing existing legacy applications and infrastructure doesn’t require a massive, time-consuming, and expensive rewrite. Through the judicious application of microservices and new development and delivery methodologies, the elephant can be eaten “one bite at a time,” with the added benefits of reducing costs, feeding innovation, and ensuring greater stability and quality across production scenarios.
While the value of these three benefits is explicit and obvious, the investment required to make these changes can be prohibitive in both cost and time. Often there is no clear starting point or easily discernible roadmap to success.
Accelerate the transformation and ensure the outcome
To address these challenges, Sirrus7, GitHub, and HashiCorp have joined together to create the DevOps Acceleration Engine. This is an enterprise-grade, out-of-the-box, integrated DevOps infrastructure executed and demonstrated through a tailored four-month engagement.
Seamlessly integrated for the immediate creation of value, the DevOps Acceleration Engine brings together best of breed industry leading tools at a discount, including GitHub Enterprise, Terraform Enterprise, and CircleCI or the customer’s CI/CD tool of choice, all delivered in a highly targeted, success-driven engagement by a team of proven industry experts who work directly onsite with customers.
The solution drives transformation at a significantly lower cost, in both time and capital, with faster execution by pre-integrating these best of breed technologies. Customers receive:
An integrated platform
Discounted licensing costs
An experienced team of experts specializing in best practices and methodologies
Engagement period focused on working hand-in-hand with customers, and moving features/fixes from backlog to production faster with higher quality than ever before
You've followed an excellent walkthrough and built a solid prototype web app. You run npm start locally and browse to http://localhost and all looks great. Now you're ready to put your app in the cloud, utilize a managed database and managed authentication, and share a link with all your coworkers and friends. But wait a minute, it looks like you'll first have to set up cloud pipelines and container images, then brush up on Bash or PowerShell and write a Dockerfile. Getting your app to the cloud is more work than you anticipated. Is there a faster way?
We're happy to share that yes there is a faster way. When you need to focus on app code you can delegate build and deployment to Azure with App Service web apps. Push your git repo directly to Azure or point to one hosted in GitHub, Azure DevOps, or BitBucket and we'll take care of building and running your code the way you expect. You may be using this already for your .NET apps; we now support Node.js and Python as well.
Do you write apps in Node.js, JavaScript, or TypeScript? We'll install your dependencies and use the build steps specified in your package.json scripts. Prefer yarn over npm? Include a yarn.lock file or use the engines field in package.json and we're happy to oblige.
Perhaps you're a Pythonista and prefer Django? Well then, we'll install your dependencies as specified in requirements.txt, prepare your static assets by running collectstatic, and give you a post-build hook to apply database migrations. We'll even run the application module from Django's conventional wsgi.py file with gunicorn. We also support other WSGI frameworks like Flask, Bottle, or Pyramid; configuration details are here.
We're happy to now support Node.js and Python but realize our work is far from done. Please participate in our questionnaire so we can ensure your needs and scenarios are covered.
Finally, visit our issue tracker to ask questions, offer suggestions, or submit a pull request. Happy coding!
The Azure Data Box offline family lets you transfer hundreds of terabytes of data to Microsoft Azure in a quick, inexpensive, and reliable manner. We are excited to share that support for managed disks is now available across the Azure Data Box family of devices, which includes Data Box, Data Box Disk, and Data Box Heavy.
With managed disks support on Data Box, you can now move your on-premises virtual hard disks (VHDs) as managed disks in Azure with one simple step. This allows you to save a significant amount of time in lift and shift migration scenarios.
How do managed disks work with the Data Box solution?
The Data Box family supports the following managed disk types: Premium SSD, Standard SSD, and Standard HDD. When you place your order for any of the Data Box data transfer solutions in the Azure portal, you can now select your storage destination as managed disks and specify the resource groups for ingestion. You will be asked to select a staging storage account, which is used to stage VHDs as page blobs and to then convert page blobs to managed disks.
When your Data Box device arrives, it will have shares or folders corresponding to the selected resource groups. These shares or folders are further broken down by managed disk storage type – Premium SSD, Standard SSD, and Standard HDD. Copying your data to the target managed disk type is as easy as copying the VHDs to the corresponding folders using a utility like robocopy, or just drag and drop.
For more information on moving to managed disks, please refer to the following:
You can also place an order for a Data Box today and import your VHDs as managed disks. Please continue to provide your valuable thoughts and comments by posting on Azure Feedback.
Welcome to the Cloud Commercial Communities monthly webinar and podcast update. Each month the team focuses on core programs, updates, trends, and technologies that Microsoft partners and customers need to know to increase success using Azure and Dynamics. Make sure you catch a live webinar and participate in live QA. If you miss a session, you can review it on demand. Also consider subscribing to the industry podcasts to keep up to date with industry news.
Optimize Your Marketplace Listing with Featured Apps and Services – Tuesday, February 5, 2019, 11:00 AM PST
Do you have an application or service listed on Azure Marketplace or AppSource? Looking to optimize your listing to be more discoverable by customers? Discoverability in Azure Marketplace and AppSource can be optimized in a variety of ways. Join this session to learn about how you can gain more visibility for your listings by optimizing content, using keywords, adding trials, and about what matters to Microsoft for Featured Apps and Featured Services on Azure Marketplace and AppSource.
Leveraging Free Azure Sponsorship to Grow Your Business on Azure – Tuesday, February 12, 2019, 10:00 AM PST
Microsoft has made significant investments in our partners and customers to help them meet today’s complex business challenges and drive business growth. Through Microsoft Azure Sponsorship, partners and customers can get access to free Azure based on their deployment and technical needs. Azure Sponsorship is available to new and existing Azure customers looking to try new partner solutions, and to partners working to build their solutions on Azure.
Get the Most Out of Azure with Azure Advisor – Tuesday, February 19, 2019, 10:00 AM PST
Azure Advisor is a free Azure service that analyzes your configurations and usage and provides personalized recommendations to help you optimize your resources for high availability, security, performance, and cost. In this demo-heavy webinar, you’ll learn how to review and remediate Azure Advisor recommendations so you can stay on top of Azure best practices and get the most out of your Azure investment both for your own organization and your customers.
Incidents, Maintenance, and Health Advisories: Stay Informed with Azure Service Health – Tuesday, February 26, 2019, 10:00 AM PST
Azure Service Health is a free Azure service that provides personalized alerts and guidance when Azure service issues affect you. It notifies you, helps you understand the impact to your resources, and keeps you updated as the issue is resolved. It can also help you prepare for planned maintenance and changes that could affect the availability of your resources. In this demo-heavy webinar, you’ll learn how to use Azure Service Health to keep both your organization and your customers informed about Azure service incidents.
Introducing a New Approach to Learning: Microsoft Learn – Wednesday, February 27, 2019, 11:00 AM PST
At Microsoft Ignite 2018, Microsoft launched an exciting new learning platform called Microsoft Learn. During this session, we will provide a demo and overview of the platform, the inspiration and vision behind its design, and how we have adapted training to modern learning styles.
Protecting your IaaS virtual machine based applications
Azure Stack is an extension of Azure that lets you deliver IaaS Azure services from your organization’s datacenter. Consuming IaaS services from Azure Stack requires a modern approach to business continuity and disaster recovery (BC/DR). If you’re just starting your journey with Azure and Azure Stack, make sure to work through a comprehensive BC/DR strategy so your organization understands the immediate and long-term impact of modernizing applications in the context of cloud. If you already have Azure Stack, keep in mind that each application must have a well-articulated BC/DR plan calling out the resiliency, reliability, and availability requirements that meet the business needs of your organization.
What Azure Stack is and what it isn’t
Since launching Azure Stack at Ignite 2017, we’ve received feedback from many customers on the challenges they face within their organization evangelizing Azure Stack to their end customers. The main concerns are the stark differences from traditional virtualization. In the context of modernizing BC/DR practices, three misconceptions stand out:
Azure Stack is just another virtualization platform
Azure Stack is delivered as an appliance on prescriptive hardware co-engineered with our integrated system partners. Your focus must be on the services delivered by Azure Stack and the applications your customers will deploy on the system. You are responsible for working with your applications teams to define how they will achieve high availability, backup recovery, disaster recovery, and monitoring in the context of modern IaaS, separate from infrastructure running the services.
I should be able to use the same virtualization protection schemes with Azure Stack
Azure Stack is delivered as a sealed system with multiple layers of security to protect the infrastructure. Constraints include:
Scale unit nodes and infrastructure services have code integrity enabled.
At the networking layer, the traffic flow defined in the switches is locked down at deployment time using access control lists.
Given these constraints, there is no opportunity to install backup/replication agents on the scale-unit nodes, grant access to the nodes from an external device for replication and snapshotting, or physically attach external storage devices for storage level replication to another site.
Another ask from customers is the possibility of deploying one Azure Stack scale-unit across multiple datacenters or sites. Azure Stack doesn’t support a stretched or multi-site topology for scale-units. In a stretched deployment, the expectation is that nodes in one site can go offline with the remaining nodes in the secondary site available to continue running applications. From an availability perspective, Azure Stack only supports N-1 fault tolerance, so losing half of the node count will take the system offline. In addition, based on how scale-units are configured, Azure Stack only supports fault domains at a node level. There is no concept of a site within the scale-unit.
I am not deploying modern applications in Azure, none of this applies to me
Azure Stack is designed to offer cloud services in your datacenter. There is a clear separation between the operation of the infrastructure and how IaaS VM-based applications are delivered. Even if you’re not planning to deploy any applications to Azure, deploying to Azure Stack is not “business as usual” and will require thinking through the BC/DR implications throughout the entire lifecycle of your application.
Define your level of risk tolerance
With the understanding that Azure Stack requires a different approach to BC/DR for your IaaS VM-based applications, let’s look at the implications of having one or more Azure Stack systems, the physical and logical constructs in Azure Stack, and the recovery objectives you and your application owners need to focus on.
How far apart will you deploy Azure Stack systems
Let’s start by defining the impact radius you want to protect against in the event of a disaster. This can be as small as a rack in a co-location facility or as large as an entire region of a country or continent. Within the impact radius, you can choose to deploy one or more Azure Stack systems. If the region is large enough, you may even have multiple datacenters close together, each with Azure Stack systems. The key takeaway is that if the site goes offline due to a disaster or catastrophic event, no amount of redundancy will keep the Azure Stack systems online. If your intent is to survive the loss of an entire site, as the diagram below shows, then you must consider deploying Azure Stack systems into multiple geographic locations separated by enough distance that a disaster in one location does not impact the others.
Help your application owners understand the physical and logical layers of Azure Stack
Next it’s important to understand the physical and logical layers that come together in an Azure Stack environment. The Azure Stack system running all the foundational services and your applications physically reside within a rack in a datacenter. Each deployment of Azure Stack is a separate instance or cloud with its own portal. The diagram below shows the physical and logical layering that’s common for all Azure Stack systems deployed today and for the foreseeable future.
Define the recovery time objectives for each application with your application owners
Now that you have a clear understanding of your risk tolerance if a system goes offline, you need to decide on the protection schemes for your applications so you can quickly recover applications and data on a healthy system. Start by making sure your applications are designed to be highly available within a scale-unit, using availability sets to protect against hardware failures. In addition, consider the possibility of an application going offline due to corruption or accidental deletion; recovery can be as simple as scaling out an application or restoring from a backup.
To survive an outage of the entire system, you’ll need to identify the availability requirements of each application, where the application can run in the event of an outage, and what tools you need to introduce to enable recovery. If your application can run temporarily in Azure, you can use services like Azure Site Recovery and Azure Backup to protect your application. Another option is to have additional Azure Stack systems fully deployed, operational, and ready to run applications. The time required to get the application running on a secondary system is the recovery time objective (RTO). This objective is established between you and the application owners. Some application owners will only tolerate minimal downtime while others are ok with multiple days of downtime if the data is protected in a separate location. Achieving this RTO will differ from one application to another. The diagram below summarizes the common protection schemes used at the VM or application level.
In the event of a disaster, there will be no time to request an on-demand deployment of Azure Stack to a secondary location. If you don’t have a deployed system in a secondary location, you will need to order one from your hardware partner. The time required to deliver, install, and deploy the system is measured in weeks.
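To make that planning exercise concrete, here is a minimal Python sketch of one way to record the RTO agreed with each application owner and map it to a protection scheme. The tier thresholds, application names, and scheme descriptions are illustrative assumptions, not prescriptive guidance.

```python
# Hypothetical sketch: map each application's agreed recovery time objective (RTO)
# to a protection scheme. Tier names, thresholds, and app names are illustrative.
from dataclasses import dataclass

@dataclass
class AppProtectionPlan:
    name: str
    rto_hours: float   # recovery time objective agreed with the application owner
    stateful: bool     # does the app persist data that must be protected?

def protection_scheme(plan: AppProtectionPlan) -> str:
    """Pick an illustrative protection scheme based on the agreed RTO."""
    if not plan.stateful:
        # Stateless apps can simply be redeployed on a healthy system.
        return "Redeploy from template on a secondary Azure Stack system or in Azure"
    if plan.rto_hours <= 1:
        return "Warm standby on a second Azure Stack system with application-level replication"
    if plan.rto_hours <= 24:
        return "Replicate to Azure with Azure Site Recovery and fail over on demand"
    return "Back up with Azure Backup (or a partner agent) and restore to a healthy system"

apps = [
    AppProtectionPlan("order-api", rto_hours=0.5, stateful=True),
    AppProtectionPlan("reporting", rto_hours=48, stateful=True),
    AppProtectionPlan("web-frontend", rto_hours=4, stateful=False),
]
for app in apps:
    print(f"{app.name}: {protection_scheme(app)}")
```

In practice the mapping is negotiated per application, but capturing it in a simple inventory like this keeps the agreed objectives visible when you plan capacity on secondary systems.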
Establish the offerings for application and data protection
Now that you know what you need to protect on Azure Stack and your risk tolerance for each application, let’s review some specific patterns used with IaaS VMs.
Data protection
Applications deployed into IaaS VMs can be protected at the guest OS level using backup agents. Data can be restored to the same IaaS VM, to a new VM on the same system, or to a different system in the event of a disaster. Backup agents support multiple data sources in an IaaS VM (a simplified file-level sketch follows this list), such as:
Disk: This requires block-level backup of one, some, or all disks exposed to the guest OS. It protects the entire disk and captures any changes at the block level.
File or folder: This requires file system-level backup of specific files and folders on one, some, or all volumes attached to the guest OS.
OS state: This requires backup targeted at the OS state.
Application: This requires a backup coordinated with the application installed in the guest OS. Application-aware backups typically include quiescing input and output in the guest for application consistency (for example, Volume Shadow Copy Service (VSS) in the Windows OS).
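As a simplified illustration of the file or folder pattern, the sketch below archives an application's data folder from inside the guest OS and copies it to a blob container off-system, where it could later be restored to a different VM or system. It assumes the azure-storage-blob Python package; the folder path, container name, and connection string environment variable are placeholders.

```python
# Minimal sketch: file/folder-level protection from inside the guest OS.
# Archives a data folder and copies it off-system to Blob storage (the same
# approach works against a blob endpoint on another Azure Stack system).
# Placeholders: AZURE_STORAGE_CONNECTION_STRING, container name, folder path.
import os
import shutil
from datetime import datetime, timezone
from azure.storage.blob import BlobClient

data_folder = r"C:\app\data"   # folder to protect (placeholder)
stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
archive_path = shutil.make_archive(f"appdata-{stamp}", "zip", root_dir=data_folder)

blob = BlobClient.from_connection_string(
    conn_str=os.environ["AZURE_STORAGE_CONNECTION_STRING"],
    container_name="guest-os-backups",
    blob_name=os.path.basename(archive_path),
)
with open(archive_path, "rb") as data:
    blob.upload_blob(data, overwrite=True)
print(f"Uploaded {archive_path} to container 'guest-os-backups'")
```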
Application data replication
Another option is to use replication at the guest OS level or at the application level to make data available in a different system. The replication isn’t offloaded to the underlying infrastructure; it’s handled at the guest OS level or above. For example, applications such as SQL Server support asynchronous replication in a distributed availability group.
High availability
For high availability, you need to start by understanding the data persistence model of your applications:
Stateful workloads write data to one or more repositories. It’s necessary to understand which parts of the architecture need point-in-time data protection and high availability to recover from a catastrophic event.
Stateless workloads on the other hand don’t contain data that needs to be protected. These workloads typically support on-demand scale-up and scale-down and can be deployed in multiple locations in a scale-out topology behind a load balancer.
To support application-level high availability within an Azure Stack system, multiple virtual machines are grouped into an availability set. Applications deployed in an availability set sit behind a load balancer that distributes incoming traffic across the virtual machines.
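If you script your deployments, a minimal sketch of creating such an availability set with the azure-mgmt-compute Python SDK might look like the following. The resource group, names, location, and domain counts are placeholders, and on Azure Stack you would point the client at your system's ARM endpoint and use an API profile it supports.

```python
# Minimal sketch: create an availability set for the VMs behind the load balancer.
# Names, location, and domain counts are placeholders; on Azure Stack, point
# ComputeManagementClient at the Azure Stack ARM endpoint and use a supported
# API profile, and check the fault domain limits of your system.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"
compute_client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

availability_set = compute_client.availability_sets.create_or_update(
    "app-rg",        # resource group (placeholder)
    "app-avset",     # availability set name (placeholder)
    {
        "location": "local",                  # region name on your system (placeholder)
        "platform_fault_domain_count": 2,
        "platform_update_domain_count": 5,
        "sku": {"name": "Aligned"},           # required when the VMs use managed disks
    },
)
print(f"Created availability set: {availability_set.name}")
```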
Across Azure Stack systems, a similar approach is possible, with the following differences: the load balancer must be external to both systems or hosted in Azure (for example, Azure Traffic Manager), and availability sets do not span independent Azure Stack systems.
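Because the load balancer sits outside both systems, cross-system failover ultimately comes down to directing clients at whichever deployment is currently healthy. The sketch below is a deliberately simplified stand-in for that logic (Traffic Manager does the equivalent with DNS-based health probes); the endpoint URLs and health path are placeholders.

```python
# Simplified stand-in for external traffic direction across two Azure Stack
# systems: probe a health endpoint on each deployment and return the first
# healthy one. Traffic Manager performs the equivalent with DNS-based probing.
import urllib.request

ENDPOINTS = [
    "https://app.stack-east.contoso.com",   # primary Azure Stack system (placeholder)
    "https://app.stack-west.contoso.com",   # secondary Azure Stack system (placeholder)
]

def first_healthy(endpoints, health_path="/health", timeout=3):
    for base in endpoints:
        try:
            with urllib.request.urlopen(base + health_path, timeout=timeout) as resp:
                if resp.status == 200:
                    return base
        except OSError:
            continue  # unreachable or unhealthy; try the next system
    return None

active = first_healthy(ENDPOINTS)
print(f"Routing traffic to: {active or 'no healthy endpoint found'}")
```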
Conclusion
Deploying your IaaS VM-based applications to Azure and Azure Stack requires a comprehensive evaluation of your BC/DR strategy. “Business as usual” is not enough in the context of cloud. For Azure Stack, you need to evaluate the resiliency, availability, and recoverability requirements of the applications separate from the protection schemes for the underlying infrastructure.
You must also reset end-user expectations, starting with the agreed-upon SLAs. Customers onboarding their VMs to Azure Stack will need to agree to the SLAs that are possible on Azure Stack. For example, Azure Stack will not meet the stringent zero-data-loss requirements of some mission-critical applications that rely on storage-level synchronous replication between sites. Take the time to identify these requirements early on and build a successful track record of onboarding new applications to Azure Stack with the appropriate level of protection and disaster recovery.
We are excited to share the public preview of AzCopy in Azure Storage Explorer. AzCopy is a popular command-line utility that provides performant data transfer into and out of a storage account. The new version of AzCopy further enhances performance and reliability through a scalable design, where concurrency is scaled up according to the number of the machine’s logical cores. The tool’s resiliency is also improved by automatically retrying failed transfers.
Azure Storage Explorer provides a graphical interface for various storage tasks, and it now supports using AzCopy as a transfer engine to provide the highest throughput for transferring your files to and from Azure Storage. This capability is available today as a preview in Azure Storage Explorer.
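For a sense of how that concurrency design translates to the SDKs, the hedged Python sketch below uploads a large blob with the transfer parallelism scaled to the machine's logical core count. It illustrates the idea rather than AzCopy itself; the account URL, container, and file names are placeholders.

```python
# Illustration of the concurrency idea (not AzCopy itself): upload a large file
# in parallel chunks, scaling concurrent connections with the logical core count.
# Account URL, container, and file names are placeholders.
import os
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://<account>.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
blob = service.get_blob_client(container="backups", blob="large-dataset.bin")

with open("large-dataset.bin", "rb") as data:
    blob.upload_blob(
        data,
        overwrite=True,
        max_concurrency=os.cpu_count() or 1,  # scale parallel connections with logical cores
    )
```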
Enable AzCopy for blob upload and download
We have heard from many of you that the performance of your data transfer matters. Let’s be honest, we all have better things to do than wait around for files to be transferred to Azure. Now with AzCopy in Azure Storage Explorer, we give you all that time back!
With the AzCopy preview enabled, blob operations will be faster than before. To enable this option, go to the Preview menu and select Use AzCopy for improved blob Upload and Download.
We are working on support for Azure Files and batch blob deletes. Feel free to let us know what you would like to see supported through our GitHub repository.
Figure 1: Enable AzCopy in Azure Storage Explorer
How fast is it?
With a quick test in our environment we were able to see great improvements in uploading files with AzCopy in Azure Storage Explorer. Note that the times may vary on each machine.
Scenario | Storage Explorer | Storage Explorer with AzCopy v10 | Improvement
10K x 100KB files | 1 hour 36 minutes | 59 seconds | 98.9 percent
100 x 100MB files | 5 minutes 12 seconds | 1 minute 35 seconds | 69.5 percent
1 x 10GB file | 3 minutes 41 seconds | 1 minute 40 seconds | 54.7 percent
Figure 2: Performance improvement from using AzCopy as the transfer engine for blob upload and download
Figure 3: AzCopy uploads/downloads blobs efficiently (1 x 10GB file)
Figure 4: AzCopy uploads/downloads blobs efficiently (10,000 x 10KB files)
Next steps
We invite you to try out the AzCopy preview feature in Azure Storage Explorer today, and we look forward to hearing your feedback. If you identify any problems or want to make a feature suggestion, please make sure to report your issue on our GitHub repository.
Original equipment manufacturers (OEMs) make the wheels go round for the business world. But demand for faster, cheaper, and smarter products and components puts major downward pressure on profit margins. Successful OEMs are always on the lookout for opportunities to drive down costs and differentiate their brands, and the rise of the Internet of Things (IoT) offers a golden opportunity to do both by embracing fundamental supply chain transformation.
To get a better understanding of the benefits, best practices, and current state of play in supply chain transformation, we enlisted The Economist Intelligence Unit to survey 250 senior executives at OEMs in North America, Europe, and Asia-Pacific. The insights from those conversations formed the basis of the new study, Putting customers at the center of the supply chain. Here are some of the intriguing highlights.
Creating the intelligent supply chain
According to the study, 99 percent of OEMs believe supply chain transformation is important to meet their organizations’ strategic objectives. The vast majority, 97 percent, consider cloud technology to be an essential component of that transformation, which makes sense given that cloud offers the unprecedented ability to collect and analyze data at scale. To date, just 61 percent have embraced cloud across their organization—meaning that for many, cloud remains an obvious and notable opportunity.
Beyond cloud, IoT presents a significant opportunity for OEMs. IoT is the fundamental technology underpinning smart products and components, like embedded sensors that monitor performance, or telemetry systems on connected vehicles.
IoT-enabled products and components can effectively extend the supply chain to include the customer, enabling the delivery of software updates directly, while providing ongoing access to data about how offerings are being used. This adds supply-chain complexity but also delivers significant new business opportunities.
This extension of the supply chain gives OEMs a far deeper understanding of customer behaviors and needs, and lets them better serve customers with add-on services based on that understanding. To optimize the value of the customer data they collect, some are even embracing entirely new business models.
Armed with real, data-based insights into exactly how and when their products are being used, OEMs can become service providers, shifting from selling products to customers to charging them subscription or per-use fees. Rolls-Royce, for example, charges customers of its jet engines a monthly fee based on flying hours. Industrial machinery makers like Sandvik Coromant are also now charging customers based on use.
Other emerging technologies that OEMs are turning to for supply chain transformation include robotics, which generates valuable data while performing tasks like product assembly and order picking faster and more accurately than humans; artificial intelligence (AI), which powers smart-product capabilities like predictive maintenance; and blockchain, which enables supply-chain stakeholders to share an immutably accurate record of deliveries. These technologies can supercharge the collection, management, analysis, and security of supply-chain data. And like IoT, they can drive the creation of brand-new ways of doing business.
Best practices in supply-chain transformation
In a world where a growing number of things around us collect data about us, forward-thinking OEMs are increasingly embracing fundamental changes in their supply chains. With the goal of achieving operational excellence informed by a closed feedback loop with the customer, OEMs can deliver better service and products by better understanding and anticipating exactly what customers want and need.
To achieve this vision, they’re turning to technologies like cloud, IoT, AI, robotics, and blockchain. Learn more about the specific steps and approaches being taken in the full Economist report.
Customers are adopting Microsoft Teams faster than any app in Microsoft history, and one of the reasons is the powerful partner integrations available. Learn about some of the latest integrations that you can start leveraging today.
When your Azure resources go down, one of your first questions is probably, “Is it me or is it Azure?” Azure Service Health helps you stay informed and take action when Azure service issues like incidents and planned maintenance affect you by providing a personalized health dashboard, customizable alerts, and expert guidance.
In this blog, we’ll cover how you can use Azure Service Health’s personalized dashboard to stay informed about issues that could affect you now or in the future.
Monitor Azure service issues and take action to mitigate downtime
You may already be familiar with the Azure status page, a global view of the health of all Azure services across all Azure regions. It’s a good reference for major incidents with widespread impact, but we recommend using Azure Service Health to stay informed about Azure incidents and maintenance. Azure Service Health only shows issues that affect you, provides information about all incidents and maintenance, and has richer capabilities like alerting, shareable updates and root-cause analyses (RCAs), and other guidance and support.
Azure Service Health tracks three types of health events that may impact you:
Service issues: Problems in Azure services that affect you right now.
Planned maintenance: Upcoming maintenance that can affect the availability of your services in the future. Typically communicated at least seven days prior to the event.
Health advisories: Health-related issues that may require you to act to avoid service disruption. Examples include service retirements, misconfigurations, exceeding a usage quota, and more. Usually communicated at least 90 days prior, with notable exceptions including service retirements, which are announced at least 12 months in advance, and misconfigurations, which are immediately surfaced.
Azure Service Health’s dashboard provides a large amount of information about incidents, planned maintenance, and other health advisories that could affect you. While you can always visit the dashboard in the portal, the best way to stay informed and take action is to set up Azure Service Health alerts. With alerts, as soon as we publish any health-related information, you’ll get notified on whichever channels you prefer, including email, SMS, push notification, webhook into ServiceNow, and more.
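If you prefer to script alert creation, the sketch below shows one plausible approach using the azure-mgmt-monitor Python package, assuming an action group with your email, SMS, or webhook targets already exists. The resource IDs and names are placeholders, and the exact payload shape should be verified against your SDK version and the activityLogAlerts REST schema.

```python
# Hedged sketch: create an activity log alert scoped to Service Health events,
# wired to an existing action group. All IDs and names are placeholders; verify
# the payload against your azure-mgmt-monitor version before relying on it.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<subscription-id>"
monitor_client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

action_group_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/ops-rg/"
    "providers/microsoft.insights/actionGroups/ops-notify"
)

alert = monitor_client.activity_log_alerts.create_or_update(
    "ops-rg",                  # resource group for the alert rule (placeholder)
    "service-health-alert",    # alert rule name (placeholder)
    {
        "location": "Global",
        "enabled": True,
        "scopes": [f"/subscriptions/{subscription_id}"],
        "condition": {
            "all_of": [{"field": "category", "equals": "ServiceHealth"}]
        },
        "actions": {"action_groups": [{"action_group_id": action_group_id}]},
        "description": "Notify the ops team about service issues, planned maintenance, and health advisories",
    },
)
print(f"Created alert rule: {alert.name}")
```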
Below are a few resources to help you get started: