
.NET Core January 2020 Updates – 2.1.15, 3.0.2, and 3.1.1


Today, we are releasing the .NET Core January 2020 Update. These updates contain security and reliability fixes. See the individual release notes for details on updated packages.

NOTE: If you are a Visual Studio user, there are MSBuild version requirements, so use only the .NET Core SDK supported for each Visual Studio version. The information needed to make this choice is provided on the download page. If you use other development environments, we recommend using the latest SDK release.
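If you are not sure which SDKs and runtimes are installed on a machine, the dotnet CLI can list them, which makes it easy to confirm that the updated versions have been picked up:

dotnet --list-sdks
dotnet --list-runtimes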

Getting the Update

The latest .NET Core updates are available on the .NET Core download page. This update will be included in a future update of Visual Studio.

See the .NET Core release notes ( 2.1.15 | 3.0.2 | 3.1.1 ) for details on the release, including issues fixed and affected packages.

Docker Images

.NET Docker images have been updated for today’s release. The following repos have been updated.

Note: You must pull updated .NET Core container images to get this update, with either docker pull or docker build --pull.
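For example, if your image is based on the 3.1 ASP.NET Core runtime (the repository, tag, and image name below are illustrative), re-pulling before building picks up the patched base image:

docker pull mcr.microsoft.com/dotnet/core/aspnet:3.1
docker build --pull -t myapp .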

Security

CVE-2020-0602: ASP.NET Core Denial of Service Vulnerability

Microsoft is releasing this security advisory to provide information about a vulnerability in ASP.NET Core. This advisory also provides guidance on what developers can do to update their applications to remove this vulnerability. Microsoft is aware of a denial of service vulnerability that exists when ASP.NET Core improperly handles web requests. An attacker who successfully exploited this vulnerability could cause a denial of service against an ASP.NET Core web application. The vulnerability can be exploited remotely, without authentication. A remote unauthenticated attacker could exploit this vulnerability by issuing specially crafted requests to the ASP.NET Core application. The update addresses the vulnerability by correcting how the ASP.NET Core web application handles web requests.

CVE-2020-0603: ASP.NET Core Remote Code Execution Vulnerability

Microsoft is releasing this security advisory to provide information about a vulnerability in ASP.NET Core. This advisory also provides guidance on what developers can do to update their applications to remove this vulnerability. Microsoft is aware of a remote code execution vulnerability that exists in ASP.NET Core software when the software fails to handle objects in memory. An attacker who successfully exploited this vulnerability could execute arbitrary code in the context of the current user. The vulnerability can be exploited remotely, without authentication. A remote unauthenticated attacker could exploit this vulnerability by issuing specially crafted requests to the ASP.NET Core application. The update addresses the vulnerability by correcting how the ASP.NET Core web application handles objects in memory.

CVE-2020-0605: .NET Core Remote Code Execution Vulnerability

Microsoft is releasing this security advisory to provide information about a vulnerability in .NET Core. This advisory also provides guidance on what developers can do to update their applications to remove this vulnerability. Microsoft is aware of a remote code execution vulnerability that exists in .NET software when the software fails to check the source markup of a file. An attacker who successfully exploited the vulnerability could run arbitrary code in the context of the current user. Exploitation of the vulnerability requires that a user open a specially crafted file with an affected version of .NET Core. In an email attack scenario, an attacker could exploit the vulnerability by sending the specially crafted file to the user and convincing the user to open the file. The security update addresses the vulnerability by correcting how .NET Core checks the source markup of a file.

CVE-2020-0606: .NET Core Remote Code Execution Vulnerability

Microsoft is releasing this security advisory to provide information about a vulnerability in .NET Core. This advisory also provides guidance on what developers can do to update their applications to remove this vulnerability. Microsoft is aware of a remote code execution vulnerability that exists in .NET software when the software fails to check the source markup of a file. An attacker who successfully exploited the vulnerability could run arbitrary code in the context of the current user. Exploitation of the vulnerability requires that a user open a specially crafted file with an affected version of .NET Core. In an email attack scenario, an attacker could exploit the vulnerability by sending the specially crafted file to the user and convincing the user to open the file. The security update addresses the vulnerability by correcting how .NET Core checks the source markup of a file.

The post .NET Core January 2020 Updates – 2.1.15, 3.0.2, and 3.1.1 appeared first on .NET Blog.


New Azure blueprint for CIS Benchmark


We’ve released our newest Azure blueprint that maps to another key industry standard, the Center for Internet Security (CIS) Microsoft Azure Foundations Benchmark. This follows the recent announcement of our Azure blueprint for FedRAMP moderate and adds to the growing list of Azure blueprints for regulatory compliance, which now includes ISO 27001, NIST SP 800-53, PCI-DSS, UK OFFICIAL, UK NHS, and IRS 1075.

Azure Blueprints is a free service that enables cloud architects and central information technology groups to define a set of Azure resources that implements and adheres to an organization's standards, patterns, and requirements. Azure Blueprints makes it possible for development teams to rapidly build and stand up new trusted environments within organizational compliance requirements. Customers can apply the new CIS Microsoft Azure Foundations Benchmark blueprint to new subscriptions as well as existing environments.

CIS benchmarks are configuration baselines and best practices for securely configuring a system developed by CIS, a nonprofit entity whose mission is to “identify, develop, validate, promote, and sustain best practice solutions for cyber defense.” A global community collaborates in a consensus-based process to develop these internationally recognized security standards for defending IT systems and data against cyberattacks. Used by thousands of businesses, they offer prescriptive guidance for establishing a secure baseline system configuration. System and application administrators, security specialists, and others who develop solutions using Microsoft products and services can use these best practices to assess and improve the security of their applications.

Each of the CIS Microsoft Azure Foundations Benchmark recommendations is mapped to one or more of the 20 CIS Controls that were developed to help organizations improve their cyber defense. The blueprint assigns Azure Policy definitions to help customers assess their compliance with the recommendations. Major elements of all nine sections of the recommendations from the CIS Microsoft Azure Foundations Benchmark v1.1.0 include:

Identity and Access Management (1.0)

  • Assigns Azure Policy definitions that help you monitor when multi-factor authentication isn't enabled on privileged Azure Active Directory accounts.
  • Assigns an Azure Policy definition that helps you monitor when multi-factor authentication isn't enabled on non-privileged Azure Active Directory accounts.
  • Assigns Azure Policy definitions that help you monitor for guest accounts and custom subscription roles that may need to be removed.

Security Center (2.0)

  • Assigns Azure Policy definitions that help you monitor networks and virtual machines where the Security Center standard tier isn't enabled.
  • Assigns Azure Policy definitions that help you ensure that virtual machines are monitored for vulnerabilities and remediated, endpoint protection is enabled, and system updates are installed on virtual machines.
  • Assigns an Azure Policy definition that helps you ensure virtual machine disks are encrypted.

Storage Accounts (3.0)

  • Assigns an Azure Policy definition that helps you monitor storage accounts that allow insecure connections.
  • Assigns an Azure Policy definition that helps you monitor storage accounts that allow unrestricted access.
  • Assigns an Azure Policy definition that helps you monitor storage accounts that don't allow access from trusted Microsoft services.

Database Services (4.0)

  • Assigns an Azure Policy definition that helps you ensure SQL Server auditing is enabled and properly configured, and that logs are retained for at least 90 days.
  • Assigns an Azure Policy definition that helps you ensure advanced data security notifications are properly enabled.
  • Assigns an Azure Policy definition that helps you ensure that SQL Servers are configured for encryption and other security settings.

Logging and Monitoring (5.0)

  • Assigns Azure Policy definitions that help you ensure a log profile exists and is properly configured for all Azure subscriptions, and activity logs are retained for at least one year.

Networking (6.0)

  • Assigns an Azure Policy definition that helps you ensure Network Watcher is enabled for all regions where resources are deployed.

Virtual Machines (7.0)

  • Assigns an Azure Policy definition that helps you ensure disk encryption is enabled on virtual machines.
  • Assigns an Azure Policy definition that helps you ensure that only approved virtual machine extensions are installed.
  • Assigns Azure Policy definitions that help you ensure that system updates are installed, and endpoint protection is enabled on virtual machines.

Other Security Considerations (8.0)

  • Assigns an Azure Policy definition that helps you ensure that key vault objects are recoverable in the case of accidental deletion.
  • Assigns an Azure Policy definition that helps you ensure role-based access control is used to manage permissions in Kubernetes service clusters.

AppService (9.0)

  • Assigns an Azure Policy definition that helps you ensure web applications are accessible only over secure connections.
  • Assigns Azure Policy definitions that help you ensure web applications are only accessible using HTTPS, use the latest version of TLS encryption, and are only reachable by clients with valid certificates.
  • Assigns Azure Policy definitions to ensure that .NET Framework, PHP, Python, Java, and HTTP versions are the latest.

Azure customers seeking to implement compliance with CIS Benchmarks should note that although this Azure Blueprint may help customers assess compliance with particular configuration recommendations, it does not ensure full compliance with all requirements of the CIS Benchmark and CIS Controls. In addition, recommendations are associated with one or more Azure Policy definitions, and the compliance standard includes recommendations that aren't addressed by any Azure Policy definitions in blueprints at this time. Therefore, compliance in Azure Policy will only consist of a partial view of your overall compliance status.  Customers are ultimately responsible for meeting the compliance requirements applicable to their environments and must determine for themselves whether particular information helps meet their compliance needs.

Learn more about the CIS Microsoft Azure Foundations Benchmark blueprint in our documentation.


Announcing Experimental Mobile Blazor Bindings


Today I’m excited to announce a new experimental project to enable native mobile app development with Blazor: Experimental Mobile Blazor Bindings. These bindings enable developers to build native mobile apps using C# and .NET for iOS and Android using familiar web programming patterns. This means you can use the Blazor programming model and Razor syntax to define UI components and behaviors of an application. The UI components that are included are based on Xamarin.Forms native UI controls, which results in beautiful native mobile apps.

Here is a sample Counter component, which may look familiar to Blazor developers, that increments a value on each button press:

<StackLayout>
    <Label FontSize="30"
           Text="@("You pressed " + count + " times")" />
    <Button Text="+1"
            OnClick="@HandleClick" />
</StackLayout>

@code {
    int count;

    void HandleClick()
    {
        count++;
    }
}

Notice that the Blazor model is present, with code sitting side by side with the user interface markup that leverages Razor syntax with mobile-specific components. This will feel very natural for any web developer who has used Razor syntax in the past. Now, with the Experimental Mobile Blazor Bindings, you can leverage your existing web skills and knowledge to build native iOS and Android apps powered by .NET.

Here is the code above running in the Android Emulator:

Clicking increment button in Android emulator

Get started with Mobile Blazor Bindings

To get started, all you need is the .NET Core 3.0 or 3.1 SDK, Visual Studio or Visual Studio for Mac, and the ASP.NET and web development and Mobile development with .NET (Xamarin.Forms) workloads installed.

Install the templates by running this command from a command/shell window:

dotnet new -i Microsoft.MobileBlazorBindings.Templates::0.1.173-beta

And then create your first project by running this command:

dotnet new mobileblazorbindings -o MyApp

Open the solution (SLN file) in Visual Studio and mark either the Android or iOS project as the StartUp Project, which should look like this:

VS solution with shared UI, Android, and iOS projects

Now run your first Mobile Blazor Bindings app in a local emulator or on an attached mobile device! Don’t have one set up yet for development? No worries, the Xamarin documentation has all the details for you.

For documentation and walkthroughs, check out the Mobile Blazor Bindings documentation.

Why Mobile Blazor Bindings now?

Many developers delight in using XAML and Xamarin.Forms to craft beautiful native mobile apps. We have heard from a set of developers that come from a web programming background that having web specific patterns to build mobile applications would be ideal for them. The goal of these bindings is to see if developers would like to have the option of writing markup and doing data binding for native mobile applications using the Blazor-style programming model with Razor syntax and features. Would you love to see this option in the box for future versions of Visual Studio?

Learn more

To learn more about Experimental Mobile Blazor Bindings, please check out these resources:

Give feedback

Please send us your feedback via issues in our GitHub repo and by completing a short survey about your experience and expectations.

We hope you try out this new framework and let us know your thoughts!

The post Announcing Experimental Mobile Blazor Bindings appeared first on ASP.NET Blog.

Upgrading to the new Microsoft Edge


The new Microsoft Edge is now out of preview and available for download, with today’s release of our first Stable channel build (Microsoft Edge 79 stable). You can download the new Microsoft Edge today at microsoft.com/edge.

In this post, we’ll walk through what you can expect now that the new Edge channel is open – including how the update will roll out, how you can get started testing and what to expect from the preview channels going forward. 

The work of upgrading devices to the new Microsoft Edge across hundreds of millions of Windows PCs around the world won’t happen overnight. Our goal is to make this process as simple and non-intrusive as possible to deliver a great experience, while minimizing risk to users and organizations.

Installing the new Microsoft Edge 

You can get the new Microsoft Edge for Windows and macOS today by downloading it directly from microsoft.com/edge. When you install Microsoft Edge on an up-to-date Windows 10 device, it will replace the previous (legacy) version on your device. In some cases, you may be prompted to install additional updates. Your favorites, passwords, and basic settings will carry over to the new Microsoft Edge automatically. Web apps (including those built on EdgeHTML) and Microsoft Edge preview channels (such as Dev or Canary) will continue to work without interruption.

If you’re using Microsoft Edge on iOS or Android, you don’t need to take any action – your device will update automatically. 

Automatic rollout and update roadmap for consumers 

If you’d prefer not to install Microsoft Edge manually, you can wait for it to be installed in a future update to Windows 10, following our measured roll-out approach over the next several months. We will start to migrate Windows 10 customers to the new Microsoft Edge in the coming weeks, starting with a subset of Windows Insiders in the Release Preview ring.

Enterprise and education users will not be automatically upgraded at this time. Contact your administrator for more information on updating to the new Microsoft Edge in your organization. Administrators should refer to the “Enterprise updates and options” section below.  

The new Microsoft Edge will gradually be made available on Windows Update and offered to additional devices as data and feedback indicate that users are having a good experience. If you don’t want to wait, you can get the new Microsoft Edge at microsoft.com/edge. 

Whether you download today or wait for us to upgrade it on your device, your favorites, passwords, and basic settings will carry over to the new Microsoft Edge automatically. The automatic rollout will maintain your default browser setting – if your default is currently set to a browser other than Microsoft Edge, your setting will carry over once the new Microsoft Edge is installed.

Once you’ve installed Microsoft Edge, it will update independently on a roughly six-week cadence. You can always preview the next major update via the Beta channel—for example, Microsoft Edge 80 will enter the Beta channel soon, and is expected to release to Stable in February. You can learn more about Microsoft Edge preview channels in our previous blog post, What to expect in the new Microsoft Edge Insider channels. 

Enterprise updates and options 

Organizations are in full control of when the new Microsoft Edge will be deployed to their managed devices. Managed devices will not be automatically updated to the new Microsoft Edge. In addition to managed devices, Enterprise, Education, and Workstation Pro edition devices will not be automatically updated at this time. Organizations that would like to block the automatic delivery of the new Microsoft Edge to devices on Home and Pro editions with Windows Update enabled can do so either via policies or by downloading and deploying the Blocker Toolkit. Note that Internet Explorer is not impacted by our automatic rollout.
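As a rough sketch of what the Blocker Toolkit configures (verify the exact key and value against the toolkit documentation before relying on it), the block amounts to a registry value along these lines:

reg add HKLM\SOFTWARE\Microsoft\EdgeUpdate /v DoNotUpdateToEdgeWithChromium /t REG_DWORD /d 1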

When you are ready to deploy the new Microsoft Edge, you can learn more about rolling out and managing Microsoft Edge across your organization from our enterprise documentation, and you can download our offline deployment packages and administrative policy templates for configuring Microsoft Edge on Windows and macOS at our enterprise page. Eligible Microsoft 365 customers can also take advantage of FastTrack and App Assure support, launching in Q1 of 2020.

Once you have deployed the new Microsoft Edge to your organization, you can configure or restrict updates using the Microsoft Edge Update policies. In the future, we plan to include the new Microsoft Edge built into Windows, delivered through a future Windows 10 feature update for all customers.

For more guidance on deployment, check out this Microsoft Mechanics interview from Ignite, where host Jeremy Chapman interviews Chuck Friedman, CVP of Microsoft Edge engineering, and walks through deployment demos including Configuration Manager and the new security baseline for Microsoft Edge.

Getting ready for the new Microsoft Edge 

Whether you’re just trying out the new Microsoft Edge for the first time, or have been with us on this journey over the last year, thank you for getting involved and helping make Microsoft Edge great. We’ve seen exciting momentum in the Chromium project over the last year, landing more than 1900 contributions across areas like accessibility, modern input including touch, speech, digital inking, and many more, and we couldn’t be more excited for what’s next. 

Enterprise administrators and IT professionals can learn more about deploying, managing, and configuring the new Microsoft Edge in your organization at our new enterprise page.

Web developers can find guidance on incorporating Microsoft Edge into your test matrix in our recent blog post, “Getting your sites ready for the new Microsoft Edge,” as well as more information on new platform capabilities, developer tools, web apps, and more in our web developer documentation. 

Happy browsing! 

Kyle Pflug, Senior PM Lead, Microsoft Edge 

The post Upgrading to the new Microsoft Edge appeared first on Microsoft Edge Blog.

Creating a more accessible world with Azure AI


At Microsoft, we are inspired by how artificial intelligence is transforming organizations of all sizes, empowering them to reimagine what’s possible. AI has immense potential to unlock solutions to some of society’s most pressing challenges.

One challenge is that, according to the World Health Organization, globally only 1 in 10 people with a disability has access to assistive technologies and products. We believe that AI solutions can have a profound impact on this community. To meet this need, we aim to democratize AI to make it easier for every developer to build accessibility into their apps and services, across language, speech, and vision.

In view of the upcoming Bett Show in London, we’re shining a light on how Immersive Reader enhances reading comprehension for people regardless of their age or ability, and we’re excited to share how Azure AI is broadly enabling developers to build accessible applications that empower everyone.

Empowering readers of all abilities

Immersive Reader is an Azure Cognitive Service that helps users of any age and reading ability with features like reading aloud, translating languages, and focusing attention through highlighting and other design elements. Millions of educators and students already use Immersive Reader to overcome reading and language barriers.

The Young Women’s Leadership School of Astoria, New York, brings together an incredible diversity of students with different backgrounds and learning styles. The teachers at The Young Women’s Leadership School support many types of learners, including students who struggle with text comprehension due to learning differences, or language learners who may not understand the primary language of the classroom. The school wanted to empower all students, regardless of their background or learning styles, to grow their confidence and love for reading and writing.

A teacher and student looking at a computer together

Watch the story here

Teachers at The Young Women’s Leadership School turned to Immersive Reader and an Azure AI partner, Buncee, as they looked for ways to create a more inclusive and engaging classroom. Buncee enables students and teachers to create and share interactive multimedia projects. With the integration of Immersive Reader, students who are dyslexic can benefit from features that help focus attention in their Buncee presentations, while those who are just learning the English language can have content translated to them in their native language.

Like Buncee, companies including Canvas, Wakelet, ThingLink, and Nearpod are also making content more accessible with Immersive Reader integration. To see the entire list of partners, visit our Immersive Reader Partners page. Discover how you can start embedding Immersive Reader into your apps today. To learn more about how Immersive Reader and other accessibility tools are fostering inclusive classrooms, visit our EDU blog.

Breaking communication barriers

Azure AI is also making conversations, lectures, and meetings more accessible to people who are deaf or hard of hearing. By enabling conversations to be transcribed and translated in real-time, individuals can follow and fully engage with presentations.

The Balavidyalaya School in Chennai, Tamil Nadu, India teaches speech and language skills to young children who are deaf or hard of hearing. The school recently held an international conference with hundreds of alumni, students, faculty, and parents. With live captioning and translation powered by Azure AI, attendees were able to follow conversations in their native languages, while the presentations were given in English.

Learn how you can easily integrate multi-language support into your own apps with Speech Translation, and see the technology in action with Translator, with support for more than 60 languages, today.

Engaging learners in new ways

We recently announced the Custom Neural Voice capability of Text to Speech, which enables customers to build a unique voice, starting from just a few minutes of training audio.

The Beijing Hongdandan Visually Impaired Service Center leads the way in applying this technology to empower users in incredible ways. Hongdandan produces educational audiobooks featuring the voice of Lina, China’s first blind broadcaster, using Custom Neural Voice. While creating audiobooks can be a time-consuming process, Custom Neural Voice allows Lina to produce high-quality audiobooks at scale, enabling Hongdandan to support over 105 schools for the blind in China like never before.

“We were amazed by how quickly Azure AI could reproduce Lina's voice in such a natural-sounding way with her speech data, enabling us to create educational audiobooks much more quickly. We were also highly impressed by Microsoft's commitment to protecting Lina's voice and identity."—Xin Zeng, Executive Director at Hongdandan

Learn how you can give your apps a new voice with Text to Speech.

Making the world visible for everyone

According to the International Agency for the Prevention of Blindness, more than 250 million people are blind or have low vision across the globe. Last month, in celebration of the United Nations International Day of Persons with Disabilities, Seeing AI, a free iOS app that describes nearby people, text, and objects, expanded support to five new languages. The additional language support for Spanish, Japanese, German, French, and Dutch makes it possible for millions of blind or low vision individuals to read documents, engage with people around them, hear descriptions of their surroundings in their native language, and much more. All of this is made possible with Azure AI.

Try Seeing AI today or extend vision capabilities to your own apps using Computer Vision and Custom Vision.

Get involved

We are humbled and inspired by what individuals and organizations are accomplishing today with Azure AI technologies. We can’t wait to see how you will continue to build on these technologies to unlock new possibilities and design more accessible experiences. Get started today with a free trial.

Check out our AI for Accessibility program to learn more about how companies are harnessing the power of AI to amplify capabilities for the millions of people around the world with a disability.

Microsoft Sustainability Calculator helps enterprises analyze the carbon emissions of their IT infrastructure


an industry wind farm

For more than a decade, Microsoft has been investing to reduce environmental impact while supporting the digital transformation of organizations around the world through cloud services. We strive to be transparent with our commitments, evidenced by our announcement that Microsoft’s cloud datacenters will be powered by 100 percent renewable energy sources by 2025. The commitments and investments we make as a company are important steps in reducing our own environmental impact, but we recognize that the opportunity for positive change is greatest by empowering customers and partners to achieve their own sustainability goals.

An industry first—the Microsoft Sustainability Calculator

Today we’re announcing the availability of the Microsoft Sustainability Calculator, a Power BI application for Azure enterprise customers that provides new insight into carbon emissions data associated with their Azure services. Migrating from traditional datacenters to cloud services significantly improves efficiencies; however, enterprises are now looking for additional insights into the carbon impact of their cloud workloads to help them make more sustainable computing decisions. For the first time, those responsible for reporting on and driving sustainability within their organizations will have the ability to quantify the carbon impact of each Azure subscription over a given period of time and datacenter region, as well as see estimated carbon savings from running those workloads in Azure versus on-premises datacenters. This data is crucial for reporting existing emissions and is the first step in establishing a foundation to drive further decarbonization efforts.

Microsoft Sustainability Calculator carbon data visualization view

Providing transparency with rigorous methodology

The tool’s calculations are based on a customer’s Azure consumption, informed by the research in the 2018 whitepaper, “The Carbon Benefits of Cloud Computing: a Study of the Microsoft Cloud”, and have been independently verified by Apex, a leading environmental verification body. The calculator factors in inputs such as the energy requirements of the Azure service, the energy mix of the electric grid serving the hosting datacenters, Microsoft’s procurement of renewable energy in those datacenters, as well as the emissions associated with the transfer of data over the internet. The result is an estimate of the greenhouse gas (GHG) emissions, measured in total metric tons of carbon dioxide equivalent (MTCO2e), related to a customer’s consumption of Azure.

The calculator gives a granular view of the estimated emissions savings from running workloads on Azure by accounting for Microsoft’s IT operational efficiency, IT equipment efficiency, and datacenter infrastructure efficiency compared to that of a typical on-premises deployment. It also estimates the emissions savings attributable to a customer from Microsoft’s purchase of renewable energy.
   Microsoft Sustainability Calculator - Reporting

We also understand customers want transparency into the specific commitments we are making to build a more sustainable cloud. To make that information easily accessible, we’ve built a view within the tool of the renewable energy projects that Microsoft has invested in as part of its carbon neutral and renewable energy commitments. Each year Microsoft purchases renewable energy to cover its annual cloud consumption. Customers can use the world map to learn about projects in regions where they consume Azure services or have a regional presence. The projects are examples of the investments that Microsoft has made since 2012.

A path to actionable insight

Azure enterprise customers can get started by downloading the Microsoft Sustainability Calculator from AppSource now and following the included setup instructions. We’re excited by the opportunity this new tool provides for our customers to gain a deeper understanding of their current infrastructure and drive meaningful sustainability conversations within their organizations. We see this as a first step and plan to deepen and expand the tool’s capabilities in the future. We know our customers would like an even more comprehensive view of the sustainability benefits of our cloud services and look forward to supporting and enabling them in their journey.

Azure Data Explorer and Stream Analytics for anomaly detection


Anomaly detection plays a vital role in many industries across the globe, such as fraud detection for the financial industry, health monitoring in hospitals, fault detection and operating environment monitoring in the manufacturing, oil and gas, utility, transportation, aviation, and automotive industries.

Anomaly detection is about finding patterns in data that do not conform to expected behavior. It is important for decision-makers to be able to detect them and take proactive actions if needed. Using the oil and gas industry as one example, deep-water rigs with various equipment are intensively monitored by hundreds of sensors that send measurements in various frequencies and formats. Analysis or visualization is hard using traditional software platforms, and any non-productive time on deep-water oil rig platforms caused by the failure to detect an anomaly could mean large financial losses each day.

Companies need new technologies like Azure IoT, Azure Stream Analytics, Azure Data Explorer, and machine learning to ingest, process, and transform data into strategic business intelligence to enhance exploration and production, improve manufacturing efficiency, and ensure safety and environmental protection. These managed services also help customers dramatically reduce software development time, accelerate time to market, improve cost-effectiveness, and achieve high availability and scalability.

While the Azure platform provides lots of options for anomaly detection and customers can choose the technology that best suits their needs, customers have also brought questions to field-facing architects about which use cases are most suitable for each solution. We’ll examine the answers to these questions below, but first, you’ll need to know a couple of definitions:

What is a time series? A time series is a series of data points indexed in time order. In the oil and gas industry, most equipment or sensor readings are sequences taken at successive points in time or depth.

What is decomposition of an additive time series? Decomposition is the task of separating a time series into its components, as shown in the graph below.

Decomposition is the task to separate a time series into components
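In an additive model, each observation is simply the sum of those components, y(t) = seasonal(t) + trend(t) + residual(t); the baseline is the seasonal plus trend part, and anomaly detection looks for points whose residual is unexpectedly large.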

Time-series forecasting and anomaly detection

A graph showing a time series with forecasting.

Anomaly detection is the process of identifying observations that differ significantly from the majority of the data.

A graph showing an anomaly detection example.

This is an anomaly detection example with Azure Data Explorer.

  • The red line is the original time series.
  • The blue line is the baseline (seasonal + trend) component.
  • The purple points are anomalous points on top of the original time series.

To detect anomalies, either Azure Stream Analytics or Azure Data Explorer can be used for real-time analytics and detection as illustrated in the diagram below.

A diagram showing an Azure powered pattern for real-time analytics.

Azure Stream Analytics is an easy-to-use, real-time analytics service that is designed for mission-critical workloads. You can build an end-to-end serverless streaming pipeline with just a few clicks, go from zero to production in minutes using SQL, or extend it with custom code and built-in machine learning capabilities for more advanced scenarios.

Azure Data Explorer is a fast, fully managed data analytics service for near real-time analysis on large volumes of data streaming from applications, websites, IoT devices, and more. You can ask questions and iteratively explore data on the fly to improve products, enhance customer experiences, monitor devices, boost operations, and quickly identify patterns, anomalies, and trends in your data.

Azure Stream Analytics or Azure Data Explorer?

Use Case

Stream Analytics is for continuous or streaming real-time analytics, with aggregate functions that support hopping, sliding, tumbling, or session windows. It will not suit your use case if you want to write UDFs or UDAs in languages other than JavaScript or C#, or if your solution is in a multi-cloud or on-premises environment.

Data Explorer is for on-demand or interactive near real-time analytics, data exploration on large volumes of data streams, seasonality decomposition, ad hoc work, dashboards, and root cause analyses on data from near real-time to historical. It will not suit your use case if you need to deploy analytics onto the edge.

Forecasting

You can set up a Stream Analytics job that integrates with Azure Machine Learning Studio.

Data Explorer provides a native function for forecasting time series based on the same decomposition model. Forecasting is useful for many scenarios like preventive maintenance, resource planning, and more.

Seasonality

Stream Analytics does not provide seasonality support, due to the limitation of its sliding window size.

Data Explorer provides functionality to automatically detect the periods in a time series, or allows you to verify that a metric should have specific distinct period(s) if you know them.

Decomposition

Stream Analytics does not support decomposition.

Data Explorer provides a function that takes a set of time series and automatically decomposes each time series into its seasonal, trend, residual, and baseline components.

Filtering and Analysis

Stream Analytics provides functions to detect spikes and dips or change points.

Data Explorer provides analysis to find anomalous points on a set of time series, and a root cause analysis (RCA) function to run after an anomaly is detected.

Filtering

Stream Analytics provides filtering against reference data, whether slow-moving or static.

Data Explorer provides two generic functions:
•    Finite impulse response (FIR) which can be used for moving average, differentiation, shape matching
•    Infinite impulse response (IIR) for exponential smoothing and cumulative sum

Anomaly Detection

Stream Analytics provides detections for:
•    Spikes and dips (temporary anomalies)
•    Change points (persistent anomalies such as level or trend change)
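As a rough sketch of the Stream Analytics side (the input, output, column names, and thresholds below are hypothetical), spike-and-dip detection is expressed directly in the query language:

SELECT
    EventEnqueuedUtcTime AS time,
    CAST(temperature AS float) AS temp,
    AnomalyDetection_SpikeAndDip(CAST(temperature AS float), 95, 120, 'spikesanddips')
        OVER (LIMIT DURATION(second, 120)) AS SpikeAndDipScores
INTO output
FROM input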

Data Explorer provides detections for:
•    Spikes & dips, based on enhanced seasonal decomposition model (supporting automatic seasonality detection, robustness to anomalies in the training data)
•    Changepoint (level shift, trend change) by segmented linear regression
•    KQL Inline Python/R plugins enable extensibility with other models implemented in Python or R
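For comparison, a minimal Kusto (KQL) sketch of the Data Explorer approach, assuming a hypothetical table of timestamped events:

Events
| where Timestamp > ago(7d)
| make-series num=count() on Timestamp from ago(7d) to now() step 1h
| extend (anomalies, score, baseline) = series_decompose_anomalies(num, 1.5, -1, 'linefit')
| render anomalychart with (anomalycolumns=anomalies)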

What's next?

Azure Data Analytics, in general, brings you best-of-breed technologies for each workload. The new Real-Time Analytics architecture (shown above) allows you to leverage the best technology for each type of workload for stream and time-series analytics, including anomaly detection. The following is a list of resources that may help you get started quickly:


Bing partners with the ecosystem to drive fresh signals


Bing Webmaster Tools launched the Adaptive URL Submission capability, which allows webmasters to submit up to 10,000 URLs using the online toolkit through the Bing Webmaster portal (Submit URLs option) or in batch mode using the Batch API. Since launch, we have seen a high adoption rate by large websites as well as small and medium websites.
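As a sketch, a batch submission call looks roughly like this (the API key, site, and URLs are placeholders; check the Bing Webmaster Tools documentation for the exact endpoint and payload shape):

curl -X POST "https://ssl.bing.com/webmaster/api.svc/json/SubmitUrlBatch?apikey=YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"siteUrl":"https://www.example.com","urlList":["https://www.example.com/page1","https://www.example.com/page2"]}'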

We have been working with multiple partners to further Bing’s vision of driving a fundamental shift in how search engines find content, using direct notification from websites whenever content is created or updated.

A few examples to note: during the recent SEO conference (Pubcon Pro, Las Vegas), Linkedin.com and love2dev.com showcased how they used the URL Submission API, ensuring search users find timely, relevant, and trustworthy information on Bing.

Similarly, we have been working with leading SEO platforms like Botify to integrate the URL Submission API into their product offerings. This integration is an expansion of Botify’s new FastIndex solution, the first within the Botify Activation product suite, and the partnership builds upon Bing’s new programmatic URL submission process. For more information, please refer to the announcement from Botify.

The URL Submission API reviews the performance of sites registered in Bing Webmaster Tools and adaptively increases their daily quota for submitting content to Bing. We encourage websites to register on Bing Webmaster Tools using standard methods, Google Search Console Import, or Domain Connect-based verification.

Apart from integrating the URL Submission API, Botify is participating in Bing’s Content Submission API pilot, which allows for the direct push of HTML, images, and other site content to the search engine, reducing the need for crawling.

Please refer to the documentation for the easy set-up guide, the Batch URL Submission API, and cURL code examples for more details on the URL Submission API.

We are happy to bring in more partners to accelerate this shift in content discovery.  

Thanks! 
Bing Webmaster Tools Team 

My Interview and Podcast Production Process on the Hanselminutes Podcast


Hey! Did you know I have a podcast? A few, actually, but Hanselminutes has been going for over 700 episodes across 13 years, and it's pretty good if I may say so myself. It's a 30-minute show meant for your commute. It offers fresh faces and a fresh perspective on lots of topics. While it's often tech and programming-focused, I do often have guests on to talk about less techie things like relationships, mental health, life hacks, and more. I model the show after Fresh Air with Terry Gross.

I recently got a tweet from Xi Xaio asking how I host my show: the planning, the content, the restricted timing, the energy, avoiding wasted time and words, etc. Getting a good question is a gift, as it leads to a blog post! So thank you, Xi, for this gift.

If you work for NPR, you're welcome to put all 350 hours of the show on any public radio station. I'm also available to host Fresh Air or, ahem, Science Friday, and I'd do a good job at it.

Here are Xi's questions and my answers. You might also like my article How to start your first podcast - equipment, editing, publishing and more as well.

How do you keep up the number of guests for a weekly podcast?

I haven’t had too much trouble, as I just watch Hacker News, Reddit, Twitter, etc., and if I see someone cool I will invite them. I have 8 guests "in the can" right now, so I like to stay a month or two ahead. I also prioritize quieter people. Lots of folks have a PR or press person (I get a dozen pitches a week), but the most interesting people aren't doing podcasts because they are making amazing art/tech. So I like to talk to them. I know I've gotten someone good when their response is "me? Why me?" Well, because you're making/thinking/commentating!

What drives you to keep publishing even when you are on holiday, for the promise of a new episode each week - for better audience engagement, or for the demands of the advertisers?

Consistency is key and king. If you publish regularly people start to (consciously or unconsciously) come to expect it. You can fit into their life when they know your show is every week, for example. Others “publish when they can” and that means their show has no heartbeat and can’t be counted on. Life is a marathon, not a sprint, and step one is showing up. I like to show up every week. When I took a few months off last year to stay in South Africa, I had 12 shows already recorded and scheduled before I left.

You introduce the guest on their behalf. Why not let guests do it themselves?

Because most people aren’t good at introducing themselves, advocating for themselves, or talking about themselves. I like to take a moment, be consistent and talk them up. It starts the show well because it reminds them they are awesome!

You keep the episode length within 30 mins. Guests are different, some keep talking and some are succinct. How do you achieve this goal?

A typical show has 6 bullet points, 5 minutes each, as I plan the content. I'll do a lot of research (think 50 tabs open, etc) and then I work out the story arc (where do we want to take the audience) with the guest ahead of time, and I optimize the show and conversation for that process.

We bounce bullet points back and forth over email for a while or have a preliminary Skype/Facetime.

Would you mind sharing your content producing procedures after recording? I'd love to learn what steps you take from editing to publishing, and tips to be more efficient.

I store everything in a workflow of folders in Dropbox. I have an “input raw shows” folder and an “output produced shows” folder. I use Zencastr to record, and the result is a WAV file for each speaker. Then my paid producer Mandy will level the audio, edit and merge the tracks in Audacity, then add the music, produce the MP3, add the ID3 tags, and put the result in the output folder. Then she uploads it to Simplecast and schedules the show for Thursday. My custom-built podcast site then pulls the show from the Simplecast REST API and it shows up at http://hanselminutes.com.

In addition to your perseverance, what other recommendations do you have to new tech podcast hosts, like me?

Perseverance is key. No one listened to my first hundred shows. Do this for yourself first, and the audience later. 

Also, audio quality is everything. If it’s low or bad or hard to hear, you’ll lose audiences. One other tip: the better you get as an interviewer, the less you’ll have to edit, which will save you time. If you mess up, stop. Clap, then start again. The clap makes it easy to see the mistake (it'll be a spike on the audio waveform) and then you can do a "pull up" and just elide that portion.

What do you mean by "I optimize the show and conversation for that process"?

The point of a story is the story arc. You can't just randomly chat with folks, you need to have a plan and a direction. Where are you taking the listener? How will you get them there? Are you being empathic and putting yourself in the shoes of the listener? What do they know, what do they not know?

How much should you talk?

Less. It's not about me or you, it's about the guest. I play a role. I play the foil. What is a foil?

foil - a person or thing that contrasts with and so emphasizes and enhances the qualities of another.

Here is a real show. I'm in green. I'm there to ask YOUR questions (as you're not there!) and advocate for the listener. Whether or not I know the answer isn't important. I'm there to expand acronyms, provide context, and guide the journey.

Talk less, listen more

Do you have a podcast? Leave a link below and share YOUR process!


Sponsor: Like C#? We do too! That’s why we've developed a fast, smart, cross-platform .NET IDE which gives you even more coding power. Clever code analysis, rich code completion, instant search and navigation, an advanced debugger... With JetBrains Rider, everything you need is at your fingertips. Code C# at the speed of thought on Linux, Mac, or Windows. Try JetBrains Rider today!



© 2019 Scott Hanselman. All rights reserved.
     

Announcing: Visual Studio for Mac: Refresh(); event on February 24


Join us online on February 24th for the Visual Studio for Mac Refresh(); event!

Visual Studio for Mac: Refresh(); event

We’ve been hard at work making Visual Studio for Mac a great environment for building .NET Core applications. Recently, we’ve added .NET Core 3.1 support, ASP.NET Core scaffolding, Blazor support and more. It’s a great time to take a deep dive into .NET development, including games, web, mobile and cloud using Visual Studio for Mac.

To get you up to speed with all the latest features and capabilities we’re hosting the Visual Studio for Mac: Refresh(); event on February 24th at 9:00 AM Pacific. Join us online for a day full of demos and conversations around Visual Studio for Mac, web, mobile and game development using .NET.

Save the date, and stay tuned for more event details and the full agenda at the event website. We look forward to seeing you on February 24!

The post Announcing: Visual Studio for Mac: Refresh(); event on February 24 appeared first on Visual Studio Blog.

.NET everywhere apparently also means Windows 3.11 and DOS


I often talk about how .NET Core is open source and runs "everywhere." MonoGame, Unity, Apple Watches, Raspberry Pi, and Microcontrollers (as well as a dozen Linuxes, Windows, etc) is a lot of places.

Michal Strehovský wants C# to run EVERYWHERE and I love him for it.

C# running on Windows 3.11

He recently got some C# code running in two "impossible" places that are now added to our definition of everywhere. While these are fun experiments (don't do this in production), it does underscore the flexibility of both Michal's technical abilities and the underlying platform.

Running C# on Windows 3.11

In this seven-tweet thread, Michal talks about how he got C# running in Windows 3.11. The app is very simple, just calling MessageBoxA, which has been in Windows since day 1. He's using DllImport/PInvoke to call MessageBox and receive its result.
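The shape of that call is ordinary P/Invoke; a minimal sketch (not Michal's actual code) looks like this:

using System;
using System.Runtime.InteropServices;

class Program
{
    // MessageBoxA is exported by user32.dll in 32-bit Windows (and reachable via Win32s on Windows 3.11)
    [DllImport("user32.dll", CharSet = CharSet.Ansi)]
    static extern int MessageBoxA(IntPtr hWnd, string text, string caption, uint type);

    static void Main()
    {
        int result = MessageBoxA(IntPtr.Zero, "Hello World from C#!", "Hello", 0);
    }
}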

I'm showing this Windows 3.11 app first because it's cool, but he started where his DOS experiment left off. He's compiling C# native code, and once that's done you can break all kinds of rules.

In this example he's running Win16...not Win32. However (I was alive and coding and used this on a project!), in 1992 there was a bridge technology called Win32s, a subset of the Windows NT APIs backported to Windows 3.11. Given some limitations, you could write 32-bit code and thunk from Win16 to Win32.

Michal learned that the object files produced by CoreRT's AOT (ahead-of-time) compiler in 2020 can be linked with the 1994 linker from Visual C++ 2.0. The result is native code that links up with Win32s and runs in 16-bit (ish) Windows 3.11. Magical. Kudos, Michal.

Simple Hello World C# app

Running C# in 8kb on DOS

I've blogged about self-contained .NET Core 3.x executables before and I'm a huge fan. I got my app down to 28 megs. It's small by some measurements, given that it includes the .NET runtime and a lot of accoutrements. Certainly one shouldn't judge a VM/runtime by its hello world size, but Michal wanted to see how small he could go - with 8000 bytes as the goal!

He's using text-mode which I think is great. He also removes the need for the garbage collector by using a common technique - no allocations allowed. That means you can't use new anywhere. No reference types.

He uses things like "fixed char[]" fields to declare fixed arrays, remembering they must live on the stack and the stack is small.
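A minimal sketch of that pattern (type and field names are illustrative) looks like this; the fixed buffer is laid out inline in the struct rather than allocated on the managed heap:

// compile with /unsafe; no heap allocation, no GC involvement
unsafe struct TextScreen
{
    public fixed char Chars[80 * 25]; // inline buffer for an 80x25 text-mode screen
}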

Of course, when you dotnet publish something self-contained, you'll initially get a 65-meg-ish EXE that includes the app, the runtime, and the standard libraries.

dotnet publish -r win-x64 -c Release

He can use the IL Linker and PublishTrimmed to use .NET Core 3.x's tree trimming, but that only gets it down to 25 megs.
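That step corresponds roughly to publishing with the trimmer enabled:

dotnet publish -r win-x64 -c Release /p:PublishTrimmed=true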

He tries using Mono and mkbundle and that gets him down to 18.2 megs but then he hits a bug. And he's still got a runtime.

So the only runtime that isn't a runtime is CoreRT, which includes no virtual machine, just functions to support you.

dotnet publish -r win-x64 -c Release /p:Mode=CoreRT

And this gets him to 4.7 megs, but that's still too big. Some tweaks get it to about 3 megs. He can pull out reflection entirely and get to 1.2 megs! It'll fit on a floppy now!

dotnet publish -r win-x64 -c Release /p:Mode=CoreRT-ReflectionFree

This one megabyte size seems to be a hardish limit with just the .NET SDK.

Here's where Michal goes off the rails. He makes a stub reimplementation of the System base types! Then he recompiles with some magic switches to get an IL-only version of the EXE:

csc.exe /debug /O /noconfig /nostdlib /runtimemetadataversion:v4.0.30319 MiniBCL.cs GameFrameBuffer.cs GameRandom.cs GameGame.cs GameSnake.cs PalThread.Windows.cs PalEnvironment.Windows.cs PalConsole.Windows.cs /out:zerosnake.ilexe /langversion:latest /unsafe

Then he feeds that to ILC (the CoreRT ahead-of-time compiler) to get the native code:

ilc.exe zerosnake.ilexe -o zerosnake.obj --systemmodule zerosnake --Os -g

Yada yada yada, and he's now here:

"Now we have zerosnake.obj — a standard object file that is no different from object files produced by other native compilers such as C or C++. The last step is linking it."

A few more tweaks and he's at 27kb! He then flips a few linker switches to disable and strip various things - using the same techniques that native developers use - and the result is 8,176 bytes. Epic.

link.exe /debug:full /subsystem:console zerosnake.obj /entry:__managed__Main kernel32.lib ucrt.lib /merge:.modules=.rdata /merge:.pdata=.rdata /incremental:no /DYNAMICBASE:NO /filealign:16 /align:16


What's the coolest and craziest place you've ever run .NET code? Go follow Michal on Twitter and give him some applause.


Sponsor: Like C#? We do too! That’s why we've developed a fast, smart, cross-platform .NET IDE which gives you even more coding power. Clever code analysis, rich code completion, instant search and navigation, an advanced debugger... With JetBrains Rider, everything you need is at your fingertips. Code C# at the speed of thought on Linux, Mac, or Windows. Try JetBrains Rider today!



© 2019 Scott Hanselman. All rights reserved.
     

MLOps—the path to building a competitive edge


Enterprises today are transforming their businesses using Machine Learning (ML) to develop a lasting competitive advantage. From healthcare to transportation, supply chain to risk management, machine learning is becoming pervasive across industries, disrupting markets and reshaping business models.

Organizations need the technology and tools required to build and deploy successful machine learning models and operate in an agile way. MLOps is the key to making machine learning projects successful at scale. What is MLOps? It is the practice of collaboration between data science and IT teams designed to accelerate the entire machine learning lifecycle across model development, deployment, monitoring, and more. Microsoft Azure Machine Learning enables companies that fully embrace MLOps practices to truly realize the potential of AI in their business.

One great example of a customer transforming their business with machine learning and MLOps is TransLink. They support Metro Vancouver's transportation network, which served 400 million total boardings from residents and visitors in 2018. With an extensive bus system spanning 1,800 sq. kilometers, TransLink customers depend heavily on accurate bus departure times to plan their journeys.

To enhance customer experience, TransLink deployed 18,000 different sets of Machine Learning models to better predict bus departure times that incorporate factors like traffic, bad weather, and other schedule disruptions. Using MLOps with Azure Machine Learning they were able to manage and deliver the models at scale.

“With MLOps in Azure Machine Learning, TransLink has moved all models to production and improved predictions by 74 percent, so customers can better plan their journey on TransLink's network. This has resulted in a 50 percent reduction on average in customer wait times at stops.”–Sze-Wan Ng, Director of Analytics & Development, TransLink.

Johnson Controls is another customer using Machine Learning Operations at scale. For over 130 years, they have produced fire, HVAC and security equipment for buildings. Johnson Controls is now in the middle of a smart city revolution, with Machine Learning being a central aspect of their equipment maintenance approach.

Johnson Controls runs thousands of chillers with 70 different types of sensors each, streaming terabytes of data. MLOps helped put models into production in a timely fashion, with a repeatable process, to deliver real-time insights on maintenance routines. As a result, chiller shutdowns could be predicted days in advance and mitigated effectively, delivering cost savings and increasing customer satisfaction.

“Using the MLOps capabilities in Azure Machine Learning, we were able to decrease both mean time to repair and unplanned downtime by over 66 percent, resulting in substantial business gains.”–Vijaya Sekhar Chennupati, Applied Data Scientist at Johnson Controls

Getting started with MLOps

To take full advantage of MLOps, organizations need to apply the same rigor and processes as other software development projects.

To help organizations with their machine learning journey, GigaOm developed the MLOps vision report that includes best practices for effective implementation and a maturity model.

Maturity is measured through five levels of development across key categories such as strategy, architecture, modeling, processes, and governance. Using the maturity model, enterprises can understand where they are and determine what steps to take to ‘level up’ and achieve business objectives.

 

Building MLOps maturity

 

“Organizations can address the challenges of developing AI solutions by applying MLOps and implementing best practices. The report and MLOps maturity model from GigaOm can be a very valuable tool in this journey,”– Vijaya Sekhar Chennupati, Applied Data Scientist at Johnson Controls.

To learn more, read the GigaOm report and make machine learning transformation a reality for your business.

More information

MSC Mediterranean Shipping Company on Azure Site Recovery, “ASR worked like magic”


Today’s Q&A post covers an interview between Siddharth Deekshit, Program Manager, Microsoft Azure Site Recovery engineering, and Quentin Drion, IT Director of Infrastructure and Operations, MSC. MSC is a global shipping and logistics business, and our conversation focused on their organization’s journey with Azure Site Recovery (ASR). To learn more about achieving resilience in Azure, refer to this whitepaper.

I wanted to start by understanding the transformation journey that MSC is going through, including consolidating on Azure. Can you talk about how Azure is helping you run your business today?

We are a shipping line, so we move containers worldwide. Over the years, we have developed our own software to manage our core business. We have a different set of software for small, medium, and large entities, which were running on-premises. That meant we had to maintain a lot of on-premises resources to support all these business applications. A decision was taken a few years ago to consolidate all these business workloads inside Azure regardless of the size of the entity. When we are migrating, we turn off what we have on-premises and then start using software hosted in Azure and provide it as a service for our subsidiaries. This new design is managed in a centralized manner by an internal IT team.

That’s fantastic. Consolidation is a big benefit of using Azure. Apart from that, what other benefits do you see of moving to Azure?

For us, automation is a big one and a huge improvement. The API, integration, and automation capabilities we have with Azure allow us to deploy environments in a matter of hours, where before it took much, much longer because we had to order the hardware, set it up, and then configure it. Now we no longer need to worry about setup, hardware support, or warranties. The environment is all virtualized and we can, of course, provide the same level of recovery point objective (RPO), recovery time objective (RTO), and security to all the entities that we have worldwide.

Speaking of RTO and RPO, let’s talk a little bit about Site Recovery. Can you tell me what life was like before using Site Recovery?

Actually, when we started migrating workloads, we had a much more traditional approach, in the sense that we were doing primary production workloads in one Azure region, and we were setting up and managing a complete disaster recovery infrastructure in another region. So the traditional on-premises data center approach was really how we started with disaster recovery (DR) on Azure, but then we spent the time to study what Site Recovery could provide us. Based on the findings and some testing that we performed, we decided to change the implementation that we had in place for two to three years and switch to Site Recovery, ultimately to reduce our cost significantly, since we no longer have to keep our DR Azure Virtual Machines running in another region. In terms of management, it's also easier for us. For traditional workloads, we have better RPO and RTO than we saw with our previous approach. So we’ve seen great benefits across the board.

That’s great to know. What were you most skeptical about when it came to using Site Recovery? You mentioned that your team ran tests, so what convinced you that Site Recovery was the right choice?

It was really based on the tests that we did. Earlier, we were doing a lot of manual work to switch to the DR region, to ensure that domain name system (DNS) settings and other networking settings were appropriate, so there were a lot of constraints. When we tested it compared to this manual way of doing things, Site Recovery worked like magic. The fact that our primary region could fail and that didn’t require us to do a lot was amazing. Our applications could start again in the DR region and we just had to manage the upper layer of the app to ensure that it started correctly. We were cautious about this app restart, not because of the Virtual Machine(s), because we were confident that Site Recovery would work, but because of our database engine. We were positively surprised to see how well Site Recovery works. All our teams were very happy about the solution and they are seeing the added value of moving to this kind of technology for them as operational teams, but also for us in management to be able to save money, because we reduced the number of Virtual Machines that we had that were actually not being used.

Can you talk to me a little bit about your onboarding experience with Site Recovery?

I think we had six or seven major in house developed applications in Azure at that time. We picked one of these applications as a candidate for testing. The test was successful. We then extended to a different set of applications that were in production. There were again no major issues. The only drawback we had was with some large disks. Initially, some of our larger disks were not supported. This was solved quickly and since then it has been, I would say, really straightforward. Based on the success of our testing, we worked to switch all the applications we have on the platform to use Site Recovery for disaster recovery.

Can you give me a sense of what workloads you are running on your Azure Virtual Machines today? How many people leverage the applications running on those Virtual Machines for their day job?

So it's really core business apps. There is, of course, the main infrastructure underneath, but what we serve is business applications that we have written internally, presented to Citrix frontend in Azure. These applications do container bookings, customer registrations, etc. I mean, we have different workloads associated with the complete process of shipping. In terms of users, we have some applications that are being used by more than 5,000 people, and more and more it’s becoming their primary day-to-day application.

Wow, that’s a ton of usage and I’m glad you trust Site Recovery for your DR needs. Can you tell me a little bit about the architecture of those workloads?

Most of them are Windows-based workloads. The software that gets the most used worldwide is a 3-tier application. We have a database on SQL, a middle-tier server, application server, and also some web frontend servers. But for the new one that we have developed now, it's based on microservices. There are also some Linux servers being used for specific usage.

Tell me more about your experience with Linux.

Site Recovery works like a charm with Linux workloads. We only had a few mistakes in the beginning, made on our side. We wanted to use a product from Red Hat called Satellite for updates, but we did not realize that we cannot change the way that the Virtual Machines are being managed if you want to use Satellite. It needs to be defined at the beginning otherwise it's too late. But besides this, the ‘bring your own license’ story works very well and especially with Site Recovery.

Glad to hear that you found it to be a seamless experience. Was there any other aspect of Site Recovery that impressed you, or that you think other organizations should know about?

For me, it's the capability to be able to perform drills in an easy way. With the more traditional approach, each time that you want to do a complete disaster recovery test, it's always time and resource-consuming in terms of preparation. With Site Recovery, we did a test a few weeks back on the complete environment and it was really easy to prepare. It was fast to do the switch to the recovery region, and just as easy to bring back the workload to the primary region. So, I mean for me today, it's really the ease of using Site Recovery.

If you had to do it all over again, what would you do differently on your Site Recovery Journey?

I would start to use it earlier. If we hadn’t gone with the traditional active-passive approach, I think we could have saved time and money for the company. On the other hand, we were in this way confident in the journey. Other than that, I think we wouldn’t have changed much. But what we want to do now, is start looking at Azure Site Recovery services to be able to replicate workloads running on on-premises Virtual Machines in Hyper-V. For those applications that are still not migrated to Azure, we want to at least ensure proper disaster recovery. We also want to replicate some VMware Virtual Machines that we still have as part of our migration journey to Hyper-V. This is what we are looking at.

Do you have any advice for folks for other prospective or current customers of Site Recovery?

One piece of advice that I could share is to suggest starting sooner and if required, smaller. Start using Site Recovery even if it's on one small app. It will help you see the added value, and that will help you convince the operational teams that there is a lot of value and that they can trust the services that Site Recovery is providing instead of trying to do everything on their own.

That’s excellent advice. Those were all my questions, Quentin. Thanks for sharing your experiences.

Learn more about resilience with Azure. 

Getting Started with Blazor Server Apps in Visual Studio for Mac


In Visual Studio 2019 for Mac v8.4, one of the big things we added support for is developing Blazor Server applications. In this post I’ll show you how you can get started building new Blazor Server applications with Visual Studio for Mac. Blazor lets you build interactive web UIs using C# instead of JavaScript. Blazor apps are composed of reusable web UI components implemented using C#, HTML, and CSS. Both client and server code are written in C#, allowing you to share code and libraries.

Creating a new Blazor Server Project

When you first launch Visual Studio for Mac you will see the dialog that follows:

To get started you will first click New to begin creating your new Blazor Server app. You can also use the menu option File->New Solution as shown below.

Once you’ve done that, the New Project Dialog will appear. To create a Blazor Server app we will go to the .NET Core -> App section and then select Blazor Server App:

After clicking Next, you’ll be prompted to select the .NET Core version. You can select the default value, .NET Core 3.1 at the time of this post, or change it to use a specific version. For Blazor apps, .NET Core 3.0 or newer is required. Once you’ve selected Next, you’ll get to the next page in the wizard where you will give your new project a name. I have named this new project HelloBlazor.

Now that we have configured our new project we can click Create (or hit the Return key) to create the project. After the project is created, it will be opened in the IDE. I have opened the Index.razor file in Visual Studio for Mac’s editor, which you can see in the screenshot below.

Now that the project has been created, the first thing that we should do is to run the application to ensure that everything is working as expected. You can start your new Blazor app with Run > Start Debugging or Run > Start without Debugging.

In this case, let’s go with Start without Debugging because it launches faster than a debug session, and we are not intending to do any debugging currently. To Start without Debugging you can use the menu option (shown in the image above), or you can use the keyboard shortcut ⌥⌘⏎. When you start your application, it will be launched in the default system browser. You can change the launched browser by using the browser selector in the toolbar, shown in the next image.

Let’s start this app with the keyboard shortcut for Start without Debugging. After the project is built, the app will be opened in a browser. Now that we have our project up and running, let’s have some fun and customize it a bit.

In the project that is created there is a Counter page where you can click a button to increment the count. Let’s modify this page to enable the user to specify the increment value. We can do this by adding an input field to the Counter page and binding it to a new increment field that is used to increment the counter. Take a look at the updated code for Counter.razor in the following screenshot.

If you would like to copy and paste the code into your project, a snippet is below.

@page "/counter"

<h1>Counter</h1>

<input type="number" min="1" step="1" @bind-value="increment" />
<p>Current count: @currentCount</p>

<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

@code {
    public int increment = 1;
    private int currentCount = 0;

    private void IncrementCount()
    {
        currentCount += increment;
    }
}

In the code shown above, the lines indicated by an arrow are the new or edited lines of code. Here we have added a new input field (line 5) so users can configure the increment, added a new increment field (line 11) to store the increment value, and modified line 16 to use the increment value instead of the hard-coded increment of 1.

To ensure that the changes we have made are working as expected, we will start a debugging session. Let’s set a breakpoint where currentCount is incremented, on line 16. After setting that breakpoint, we will Start Debugging with the keyboard shortcut ⌘⏎. When the breakpoint is hit, we can verify that the value for increment is taken from the input field on the Counter page. The animated GIF below shows creating a breakpoint, debugging the application, and inspecting the value of increment when the breakpoint is hit.

If all goes well, the increment value was taken from the input field in the Counter page, and the app is behaving correctly. Now that we’ve shown how you can create, edit and debug a Blazor Server app, it’s time to wrap up this post.

Recap and next steps

In this post we have shown how to create a new Blazor Server application and work with it in Visual Studio for Mac. If you haven’t already, download Visual Studio for Mac to get started. If you are an existing Visual Studio for Mac user, update Visual Studio for Mac to version 8.4 or newer to get support for Blazor Server apps. In addition to developing Blazor Server apps, you can also publish them to Azure App Services.

If you have any issues while working in Visual Studio for Mac, please Report a Problem so that we can improve the product. Before we go, here are some additional resources for you.

Additional Resources

To learn more about the changes in Visual Studio 2019 for Mac v8.4, take a look at the v8.4 release blog post.

Join us for our upcoming Visual Studio for Mac: Refresh() event on February 24 for deep dive sessions into .NET development using Visual Studio for Mac, including a full session on developing Blazor applications.

For more info on Blazor a good starting point is Introduction to ASP.NET Core Blazor.

For another guide on creating a Blazor Server application in Visual Studio for Mac head over to the docs at Create Blazor web apps.

Make sure to follow us on Twitter at @VisualStudioMac and reach out to the team. Customer feedback is important to us and we would love to hear your thoughts. Alternatively, you can head over to Visual Studio Developer Community to track your issues, suggest a feature, ask questions, and find answers from others. We use your feedback to continue to improve Visual Studio 2019 for Mac, so thank you again on behalf of our entire team.

The post Getting Started with Blazor Server Apps in Visual Studio for Mac appeared first on Visual Studio Blog.


Announcing dual-screen preview SDKs and Microsoft 365 Developer Day


In November, we shared our vision for dual-screen devices and how this new device category will help people get more done on smaller and more mobile form factors. Today, we are excited to give you an update on how you can get started and optimize for dual-screen devices by:

  1. Exploring preview SDKs and standards proposals for apps and websites
  2. Embracing dual-screen experiences
  3. Learning more at Microsoft 365 Developer Day

1) Exploring preview SDKs and standards proposals for apps and websites

We are happy to announce the availability of the preview SDK for Microsoft Surface Duo, and availability in the coming weeks for the preview SDK for Windows 10. We are also excited to announce new web standards proposals to enable dual-screen experiences for websites and PWAs on both Android and Windows 10X. These new web standards proposals will provide you with the capabilities and tools you need for dual-screen devices.

Download the preview SDK for Microsoft Surface Duo

Today, developers can download the preview SDK for Surface Duo, access documentation and samples for best practices, see UX design patterns, and more. The preview SDK gives developers a first look at how you can take advantage of dual-screen experiences.

This includes:

  • Native Java APIs to support dual-screen development for the Surface Duo device, including the DisplayMask API, Hinge Angle Sensor, and new device capabilities.
  • An Android Emulator with a preview Surface Duo image that is integrated into Android Studio so you can test your app without a physical device. The emulator simulates postures, gestures, hinge angle, mimicking the seam between the two screens, and more. We’ll continue to add functionality over time.
  • Requirements for Android Studio and the Android Emulator.

We will have more announcements and discussion in the coming months and look forward to hearing your feedback.


Figure 1: The Android Emulator with a preview Surface Duo image

An early look at developing for Windows 10X

In the coming weeks, developers will have access to a pre-release version of the Windows SDK through the standard Insider builds. Our intent is to provide you with the Microsoft® Emulator on February 11th as well as new APIs for dual-screen support, documentation, and code samples.

This includes:

  • Native Windows APIs for dual-screen development to enable your app to span the two screens, detect the hinge position, and take advantage of Windows 10X.
  • Microsoft Emulator is a dual-screen Hyper-V emulator so you can deploy your existing Universal Windows Platform (UWP) and Win32 apps and test in both single- and dual-screen scenarios. The emulator simulates the physical device so you can see how your apps interact with Windows 10X.
  • Requirements: A recent Windows Insider preview build of 64-bit Windows 10 (Pro, Enterprise, or Education), a 64-bit CPU with 4 cores, 8 GB of RAM minimum (16 GB recommended), Hyper-V enabled, and a dedicated GPU that supports DirectX 11.0 or later.


Figure 2: Microsoft Emulator showing Windows 10X

Build dual-screen experiences on the web

The new Microsoft Edge, released last week, provides a powerful and compatible foundation for website and web app experiences across devices, powered by Chromium. We are actively incubating new capabilities that enable web content to provide a great experience on dual-screen devices, whether it’s running in the browser or installed as an app.

  • New web standards for dual-screen layout: We are proposing CSS primitives for dual-screen layouts and a JavaScript Window Segments Enumeration API to provide web platform primitives for web developers to detect multiple displays and lay out content across them. We expect to provide an experimental implementation of these features in preview builds of the browser soon.
  • Dual-screen polyfills: As the above features progress through the web standards process, we’ve published polyfills that you can write against as you begin to explore dual-screen development. You can find the polyfills and associated documentation at:
  • Progressive Web Apps are supported out of the box in the new Microsoft Edge, which can be installed directly from the browser on Windows 10X and Android. PWAs will support the same dual-screen layout features and tools as the browser.

We’ll have more to share about building for dual-screen devices with web technologies over the coming months – watch the Microsoft Edge blog for more details.

2) Embracing dual-screen experiences

Dual-screen devices create an opportunity for your apps to delight people in new and innovative ways. To help you get started, we are providing basic support checklists for touch and pen and for drag and drop, along with initial app pattern ideas, to ensure your apps work great on dual-screen devices.


Figure 3: Dual-screen app patterns

Your app by default will occupy a single screen, but users can span the app to cover both screens when the device is in a double-portrait or double-landscape layout. You can programmatically enable full-screen mode for your app at any time, but spanning is limited to user activity for now.


Figure 4: Dual-screen orientation and layout.

For those who are interested in native cross-platform development using React Native or Xamarin.Forms, we are working on improvements to those frameworks and code samples. You can find all the dual-screen checklists, app patterns, and new code samples as they become available on our dual-screen documentation site. Please reach out to us at dualscreendev@microsoft.com so we can work with you to ideate and innovate great dual-screen experiences together.

3) Learning more at Microsoft 365 Developer Day – Dual-Screen Experiences

Please join us online for the Microsoft 365 Developer Day, focused on dual-screen experiences on Tuesday, February 11th at 8:30 AM PDT. The keynote and sessions will show how to:

  • Get the most out of these SDKs and emulators
  • Use cross platform tools and languages
  • Design apps for dual-screen devices
  • Build dual-screen experiences on the web
  • Connect your apps with Microsoft 365

We hope that you will join us, and we are excited to see what dual-screen experiences you build.

The post Announcing dual-screen preview SDKs and Microsoft 365 Developer Day appeared first on Windows Developer Blog.

Debug z-index stacking content with 3D View in the Microsoft Edge DevTools


We are thrilled to announce the next iteration of 3D View in the Microsoft Edge DevTools, with a new feature to help debug z-index stacking context. The general 3D View shows a representation of the DOM (Document Object Model) depth using color and stacking, and the z-Index view helps you isolate the different stacking contexts of your page.

3D view is enabled by default in the Canary branch – to enable it in other branches, open the DevTools “Experiments” settings (Ctrl-Shift-P -> “Experiments“) and turn on “Enable 3D View.” If you don’t see that item, navigate to edge://flags and make sure you have enabled “Developer Tools experiments.” Once 3D view is enabled, you can find it under the “More tools” menu (or via search: Ctrl-Shift-P -> “3D View“).

Screenshot showing the 3D View in the Microsoft Edge DevTools

With our first 3D View experiment, we were able to get incredible feedback from Twitter and from the feedback button. This encouraged us to conduct further usability studies to improve the tool. Along the way, we received plenty of requests for CSS z-index debugging as a feature, and felt that the 3D View would be a great vehicle to try it out.

In the z-index tab you can further simplify the view by only showing elements with a stacking context or hiding elements with the same paint order as their parent. These two settings will make for a flatter and more readable experience. Check out our explainer for more details!

Screenshot showing z-index debugging in the DevTools 3D View
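If you haven’t run into stacking contexts before, here is a minimal, hand-rolled HTML/CSS sketch of standard CSS behavior (an illustrative example with made-up class names, not taken from the DevTools documentation) showing the kind of layering puzzle the z-index view is designed to untangle: a huge z-index on a nested element still loses to a sibling, because it can only compete inside its parent’s stacking context.

<!-- Illustrative sketch only: .parent creates a stacking context, so .child's large
     z-index is trapped inside it and .sibling (z-index: 2) still paints on top. -->
<style>
  .parent  { position: relative; z-index: 1; width: 200px; }
  .child   { position: absolute; top: 0; z-index: 9999; background: gold; }
  .sibling { position: relative; margin-top: -20px; z-index: 2; background: tomato; }
</style>

<div class="parent">
  <div class="child">z-index: 9999, but trapped in .parent's stacking context</div>
</div>
<div class="sibling">z-index: 2, and still painted on top</div>

Loading a page like this and switching to the z-index tab makes the outcome obvious at a glance: the element with z-index 9999 lives inside a lower stacking context, so it can never paint above the sibling.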

What’s next

Coming soon, we’ll have a better highlighting experience between the Elements panel and 3D View, UI improvements, and new camera controls. We’d love to hear what else you’d like to see from this experience! What other features would help you with your day to day debugging? Feel free to reach out to us on Twitter, or just click “Send feedback” in the Microsoft Edge “Help and Feedback” menu at any time.

– Erica Draud, Program Manager, Edge DevTools

 

The post Debug z-index stacking content with 3D View in the Microsoft Edge DevTools appeared first on Microsoft Edge Blog.

My views on community, productivity, kindness, and mindfulness on the Hanselminutes Fresh Tech Podcast


Scott Hanselman

At the start of a new decade and over 700 episodes of my tech podcast, I did something weird. I had myself on the show. Egotistical, perhaps, given the show literally has my name in it, but the way it happened was interesting.

This episode wasn't supposed to be an episode! I was invited by Jeff Fritz of Twitch fame to talk to his community team of Live Coders on Discord. They recorded it, and mentioned several times that it was useful content! I didn't go into the private meeting thinking I'd record a show. It was effectively a conference call with friends old and new. It's unedited and off the cuff.

So, why not try something new and make this an episode! Let me know on Twitter if you find my views on community, productivity, and life useful to you!

I talk about:

  • Longevity - Sticking to your goals
  • Relationships - Business plans/goals/life settings/culture
  • Living Life By Design rather than By Default
  • Setting the Tone
  • Positivity and how to maintain it
  • Scaling yourself and your community
  • Why Kindness Matters
  • Blogging - it's a marathon not a sprint
  • Feeding your spirit
  • Why do we do something and why do we procrastinate?
  • Removing Mental Clutter
  • Why do I blog/create? Why do you?
  • Conserving your keystrokes
  • Advice to my 20 year old self
  • Willpower and catching up
  • What can you talk about? What can you write about?
  • A question is a gift
  • Why would I allow someone who doesn't love me ruin my day?
  • Interviewing techniques and empathy
  • The importance of improv and "yes, and"
  • Charisma On Command
  • Dealing with Imposter Syndrome
  • Deliberate Practice
  • Mindfulness
  • Owning what you're good at
  • Freaking Out
  • Acceptance
  • Priorities - family and life
  • What's important?
  • Plan, execute on the plan, make a new plan

Please go listen to Episode 719 of the Hanselminutes Podcast; it's just 54 minutes long.

Hanselminutes Podcast

It's called "Myself: It's not weird at all" and I'm actually kind of proud of it. Let me know what you think in the comments!

If you like this show, you can give ME a gift by SHARING it with your people!


Sponsor: Veracode analyzed 1.4 million scans for their 2019 SOSS X report. The findings? 83% of apps have flaws like cross-site scripting, injection, and authentication—all adding to rising security debt.



© 2019 Scott Hanselman. All rights reserved.
     