
Azure Front Door Service is now generally available


Every internet-facing web application, whether it serves a large audience or a small set of users in a single region, is by default a global application. Whether you run a large news website with millions of users across the globe, a B2B application for managing your sales channels, or the site for a local pastry shop, your users are distributed or roaming across multiple locations, or your application demands deployment into multiple locations for high availability or disaster recovery. As a global application, your distributed users and application deployments require you to maximize performance for your end users and to keep the application always on across failures and attacks.

Today I am excited to announce the general availability of Azure Front Door Service (AFD), which we launched in preview last year – a scalable and secure entry point for fast delivery of your global applications. AFD is your one-stop solution for your global website or application and provides:

  • Application and API acceleration with anycast and Microsoft’s massive private global network, connecting directly to your Azure-deployed backends so your app runs with lower latency and higher throughput for your end users.
  • Global HTTP load balancing that lets you build your application resiliently across regions, fail over instantly, and offer your users an “always-on” website experience at either a domain or microservice (URL path) level.
  • SSL offload at massive scale, enabling you to maintain security and scale to a rapidly growing or expanding user base, all while reducing latency.
  • WAF at the edge, offering application security against DDoS attacks or malicious users and providing protection at scale without sacrificing performance.

Built atop Microsoft’s massive global network, Azure Front Door already supports Microsoft’s biggest web workloads, helping them deliver high-quality, highly performant services. Global brands such as Bing, Office 365, Xbox Live, MSN, LinkedIn, and Azure DevOps leverage AFD’s competitive performance, enterprise-grade reliability, and massive scalability to deliver consistent, low-latency, high-throughput user and application experiences. Today AFD provides global coverage in 65 metros across more than 35 countries, and its footprint is growing quickly.


Figure 1: Azure Front Door’s global footprint and Microsoft's Network

Use case scenarios

Customers come to AFD today focused on their core business needs: improving performance, scaling their application, enabling instant failover, or enabling complex application architectures such as IaaS plus PaaS, on-premises plus cloud, or multi-cloud hybrid experiences. AFD can be quickly and easily integrated into your application’s existing or new architecture and works out of the box. Adding AFD in front of your application or API also gives your customers the benefit of our constant improvements and optimizations at the edge, such as TCP Fast Open, WAN optimizations, and SSL improvements such as SSL session resumption. This means your users get optimized connectivity experiences from day one with Front Door.

Below is a sample reference architecture outlining how an application can be designed for improved page load times, SSL offload, and API routing. AFD runs at the edge of Microsoft’s global network, performing TCP and SSL termination close to the end user, thereby improving the performance of client access to applications. Traffic from AFD instances running at the edge to application backends is routed over Microsoft’s private global network, providing high reliability and optimized routing to the destination.

“The TCP and TLS optimizations from Azure Front Door along with their global edge footprint is perfect for our high-volume services”
- Ravi Krishnaswamy, CTO

“Azure Front Door Service allows us to manage our costs in a predictable way whilst ensuring performance for our end users”
- Colin Farrelly, DevOps SME


Figure 2: Sample architecture for accelerated and scalable web application

Another core Azure Front Door use case for building highly scalable apps is using AFD’s smart load balancing algorithm to route traffic to the fastest available backend. Unlike typical DNS-based load balancing systems, Azure Front Door delivers near-instant failover across your application backends, with granular control that even lets you fail over individual microservices. Our smart and efficient load balancing algorithms support both active-active and active-passive deployment configurations.


Figure 3: Sample architecture of an always-on web application

Azure Front Door Service is now generally available, providing a 99.99 percent availability SLA and a myriad of features including SSL offload, URL redirect and rewrite, HTTP/2, IPv6 support, session affinity, simple domain onboarding with free or custom SSL certs, caching, and much more. You can also read about the AFD WAF preview, which is available now as well. Azure Front Door’s GA pricing goes into effect on May 1, 2019. Until then, you will continue to be billed based on the preview pricing.

Get started

Get started with the Azure Front Door Service today! To learn more about the service and the various features, refer to AFD documentation. If you are interested in exploring capabilities beyond the standard offering, simply file a feature request on our UserVoice page or feel free to contact us at afdfeedback@microsoft.com.


Little great things about Visual Studio 2019


A few days ago, we announced the general availability of Visual Studio 2019. But I’ve been using Visual Studio 2019 exclusively since the first internal build – long before the release of Preview 1 in December of 2018. During this time, a lot of little features have put a smile on my face and made me more productive.

I want to share a few of them with you since they are not all obvious and some require you to change some settings. Let’s dive in.

Clean solution load

When a solution is closed, its state is saved so that next time you open it, Visual Studio can restore the collapsed/expanded state of projects and folders in Solution Explorer and reopen the documents that were left open. That’s great but I prefer a clean slate when I open solutions – no files open and all the tree nodes collapsed in Solution Explorer.

I wrote the Clean Solution extension to provide this behavior in previous versions of Visual Studio. This feature is now native to Visual Studio 2019 and can be enabled with two separate checkboxes. Go to search (Ctrl+Q) and type in “load” to find the Projects and Solutions > General options page.

Uncheck both the Reopen documents on solution load and Restore Solution Explorer project hierarchy on solution load checkboxes.

An added benefit of unchecking these two checkboxes is that solutions load faster too, because the overhead of restoring state is eliminated. Win-win.

Git pull from shortcut

I do a lot of work with GitHub repos and I often take pull requests from people. That means I must make sure to do a git pull before I make any subsequent commits. But, as it turns out repeatedly, this is something I tend to forget. The result is that I end up with merge conflicts and other nuisances.

The only way to do git pull in the past was to either use Team Explorer, the command line, or an external tool. What I really wanted was a keyboard shortcut from within Visual Studio that did it for me.

Previously, Team Explorer’s pull command was not a command you could assign a keyboard shortcut to, but now it is. Go to search (Ctrl+Q) and type “keyboard” to find the Environment > Keyboard options page. From there, find the Team.Git.Pull command in the list, assign any shortcut to it, and hit the OK button. I chose Ctrl+Shift+P.

To automatically perform a git pull upon solution load, try out the free Git Pull extension.

Code Cleanup for C#

Keeping source code neatly formatted and ensuring coding styles are consistent is something I’ve never been good at. The new Code Cleanup feature is a huge help in keeping my code neat and tidy since I have configured it to run all the fixers by default.

To do that, go to the Code Cleanup menu sitting in the bottom margin of the editor window and click Configure Code Cleanup.

In the dialog, select all the fixers one by one from the bottom pane and hit the up-arrow button to move them up into the top. Then hit OK.

Now all fixers will run every time you perform a Code Cleanup. Simply hit Ctrl+K, Ctrl+E to execute. The result is a nicely formatted document with a bunch of coding style rules applied, such as added missing braces and modifiers. Voila!
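To give a feel for the kind of transformation the fixers apply, here is a hedged before/after sketch. The Order type is hypothetical, and the exact output depends on which fixers you enable and on your code-style settings:

```csharp
// Before Code Cleanup: no accessibility modifiers, no braces on the if body.
//
// class Order
// {
//     int quantity = 3;
//
//     int Total(int price)
//     {
//         if (quantity > 0)
//             return quantity * price;
//         return 0;
//     }
// }

// After Ctrl+K, Ctrl+E with the "add braces" and "add accessibility modifiers"
// fixers enabled, the same type looks roughly like this:
internal class Order
{
    private int quantity = 3;

    private int Total(int price)
    {
        if (quantity > 0)
        {
            return quantity * price;
        }

        return 0;
    }
}
```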

IntelliCode

IntelliCode is a new feature that augments the IntelliSense completions based on the context you’re in using advanced machine learning algorithms. That proves useful for many scenarios including when you are exploring new interfaces or APIs. I write a lot of Visual Studio extensions and the API surface is so big that there are parts of it I have never used. When I’m exploring a new part of the Visual Studio API, I find it very helpful to have IntelliCode guide me through how to use it.

To enable this powerful feature, you can download IntelliCode from the Visual Studio Marketplace and install the extension.

IntelliCode works for C#, C++ and XAML.

See content of Clipboard Ring

Every time you copy (Ctrl+C) something in Visual Studio, it is being stored in the Clipboard Ring. Hitting Ctrl+Shift+V allows you to cycle through the items in the Clipboard ring and paste the item you select. I find it very useful to keep multiple things in the clipboard at once and then paste the various items to specific locations.

In Visual Studio 2019, the Clipboard Ring now shows a visual preview of its content when hitting Ctrl+Shift+V. That makes it easier than ever to navigate through the copy history and select the right item to paste.

New C# Refactorings

There are lots of new and highly useful C# refactorings that I’ve come to depend on every single day. They show up as suggestions in the light bulb and include moving members to an interface or base class, adjusting namespaces to match folder structure, converting foreach loops to LINQ queries (see the sketch below), and a lot more.
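
As an illustration, the foreach-to-LINQ refactoring turns a loop like the first method below into a query like the second. The method names and the exact shape of the generated query are illustrative, not the refactoring’s literal output:

```csharp
using System.Collections.Generic;
using System.Linq;

internal static class RefactoringExample
{
    // Before: a plain foreach loop that filters and projects a sequence.
    public static List<string> LongNamesLoop(IEnumerable<string> names)
    {
        var result = new List<string>();
        foreach (var name in names)
        {
            if (name.Length > 10)
            {
                result.Add(name.ToUpperInvariant());
            }
        }

        return result;
    }

    // After: roughly what the light-bulb refactoring produces.
    public static List<string> LongNamesLinq(IEnumerable<string> names)
        => names.Where(name => name.Length > 10)
                .Select(name => name.ToUpperInvariant())
                .ToList();
}
```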

To learn more about the new refactorings and other C# features in Visual Studio 2019, check out this post on the .NET blog.

Git Stash

Having the ability to stash away some work for future use is super helpful. Git Stash is what gives me that ability without having to create a new branch. If you’re familiar with TFS, you can think of Git Stash as a shelveset.

The best part is that I can manage all my stashes inside the Team Explorer window. They are easy to create and apply, and I’ve been using them a lot more now that Visual Studio natively supports them.

Try Visual Studio 2019

These were just a few of the many small improvements found throughout Visual Studio 2019 that I find particularly useful. Please share any tips or improvements you’ve found helpful in the comments below!

The post Little great things about Visual Studio 2019 appeared first on The Visual Studio Blog.

Device template library in IoT Central


With the new addition of a device template library into our Device Templates page, we are making it easier than ever to onboard and model your devices. Now, when you get started with creating a new template, you can choose between building one from scratch or you can quickly select from a library of existing device templates. Today you’ll be able to choose from our MXChip, Raspberry Pi, or Windows 10 IoT Core templates. We will be working to improve this library by adding more device templates which provide customer value.

The addition of the device template library helps to streamline the device modeling workflow. It saves time as you can pre-populate a model with existing details. This now opens the door for more manufacturers to create standard definitions for their devices or smart products which we’ll continue to include in this growing template library.

To get started, select the Device Templates tab and click the “+ New” button. This will bring you to our library page where you can choose which template you’d like to quickly get started with. You can also choose the Custom option if you would like to begin modeling your device template from scratch.


Once you select a template, simply give it a name and click “Create” to add this template into your application. We will automatically create a simulated device for you to view simulated data coming into this new template. Once your template has been created, you can visit the “Device Explorer” page to connect other real or simulated devices into this template.

We are excited to continue simplifying your device onboarding experience. If there are particular device templates you want to use or if you have any other suggestions, please leave us feedback with the links below.

Next steps

  • Have ideas or suggestions for new features? Post it on UserVoice.
  • To explore the full set of features and capabilities and start your free trial, visit the IoT Central website.
  • Check out our documentation including tutorials to connect your first device.
  • To give us feedback about your experience with Azure IoT Central, take this survey.
  • To learn more about the Azure IoT portfolio including the latest news, visit the Microsoft Azure IoT page.

Spinnaker continuous delivery platform now with support for Azure


Spinnaker is an open source, multi-cloud continuous delivery platform for releasing software changes with high velocity and confidence. It is being chosen by a growing number of enterprises as the open source continuous deployment platform used to modernize their application deployments. Most of these enterprises deploy applications to multiple clouds. One of Spinnaker’s features is its ability to allow users to deploy applications to different clouds using best practices and proven deployment strategies.

Until now, customers who had standardized on Spinnaker had to use custom or separate tooling to deploy their applications to Azure.

With this blog post and the recent release of Spinnaker (1.13), we are excited to announce that Microsoft has worked with the core Spinnaker team to ensure Azure deployments are integrated into Spinnaker!

These integrations strengthen our existing open source CI/CD pipeline toolchain and allow customers who have taken a dependency on Spinnaker to deploy to Azure with the tooling they already use.

Spinnaker integration

Initial release (1.13)

 

In our initial release we have enabled a core Spinnaker scenario for deploying immutable VM images – the Build, Bake, Deploy scenario.

As the scenario name suggests, there are three primary stages in the Spinnaker pipeline.

  • Build (labeled “Configuration” above): The build stage happens outside of Spinnaker and is used as a trigger for the following stages. It can be a Jenkins job, Travis job, or Webhook, and generates a package that will be used to create a VM image.
  • Bake: This stage uses the package from the previous step to create an Azure managed VM image.
  • Deploy: Finally, the deploy stage deploys one or more Virtual Machine Scale Sets using the managed VM image from the previous step. This can be done using one of the built-in strategies like Highlander or Red/Black.

Since Spinnaker is used to deploy to multiple clouds, it has created some abstractions for common infrastructure components. In this release these abstractions map to Azure infrastructure as follows:

What’s next?

We are excited to be accepted as part of the Spinnaker open source community and will continue to invest in Spinnaker to enable other scenarios such as container-based Azure Kubernetes Service (AKS) deployments, improve performance, and add flexibility in infrastructure abstractions. We will publish our roadmap, so keep an eye out and let us know what you think.

If you are interested in learning more about Spinnaker, or it’s already an important component in your DevOps and you would like to help us make the integration with Azure great, please reach out to us. You can connect directly with us in any of the following venues:

  • Join the conversation on the Azure channel in Spinnaker Slack.
  • Create issues and/or contribute on GitHub.


Visual Studio Code C/C++ extension: March 2019 Update


The March 2019 update of the Visual Studio Code C/C++ extension is now available. This release includes many new features and bug fixes, including IntelliSense caching, Build and Debug Active File, and configuration squiggles. For a full list of this release’s improvements, check out our release notes on GitHub.

IntelliSense Cache (AutoPCH)

The extension will now cache header information to improve IntelliSense speed. This is similar to precompiled header files in Visual Studio. Please note that IntelliSense caching works on Linux, macOS 10.13+ (High Sierra and later versions), and Windows.

Precompiled Headers (PCH)

Precompiled headers (PCH) can be used with compilers to speed up build times by taking the #include header files in your code and compiling them for reuse later. Without precompiled headers, IntelliSense needs to process the header files and source code. But if your header files do not change, it is not actually necessary to process the header files again.

With AutoPCH, the IntelliSense compiler generates its own PCH files. Caching these PCH files reduces the time spent parsing #include header files and improves IntelliSense performance.

IntelliSense Cache Settings

By default, the cached PCH files are currently stored in your workspace folder’s “.vscode” folder under the subdirectory ‘ipch’. You can change this location via the “C_Cpp.intelliSenseCachePath” setting. You can also control how much disk space can be used for the cache with the “C_Cpp.intelliSenseCacheSize” setting. The cached PCH files can be large, depending on the size and number of #include header files. The default cache size (for all files under the C_Cpp.intelliSenseCachePath) is 5120 MB. IntelliSense caching can be disabled by setting the cache size to 0.

Build and Debug Active File

To simplify the build and debug experience with the C/C++ extension, we added a command to help you generate build and debug tasks for single code files. When you press F5 or select the command from the context menu, we write out your tasks.json and launch.json, which automatically configures the tasks and kicks off a build and debug session. So, you no longer need to go through many of the configuration steps previously required to build and debug your active file. Here’s an example with a simple project:

A developer command prompt launches VS Code in the workspace folder. In VS Code, open a file and right-click in the editor window to see the “Build and Debug Active File” menu option. Selecting the menu option starts a debugger session on the active file.

Please Note: Since this feature writes out tasks.json and launch.json files, it currently requires that a workspace folder be opened first. Using this command will remove comments from the tasks.json and launch.json files. If you are on Windows and wish to use the cl.exe compiler through Visual Studio, you need to open the workspace folder and launch Visual Studio Code from the Developer Command Prompt using the “code .” command.

Configuration Squiggles

On our path to improving the configuration experience, we added additional validation to the c_cpp_properties.json file to assist in diagnosing potential configuration mistakes. These validation checks surface as error squiggles. The error squiggles are shown for invalid paths for properties like includePath and compilerPath. We also show error squiggles when a folder is used instead of a file or vice versa. The detected issues also show up as “Problems” in the problems window:

Changing the compiler path to “somePath” generates an error squiggle in IntelliSense for the c_cpp_properties.json config file.

We will continue improving the configuration experience in future releases.

Tell Us What You Think

Download the C/C++ extension for Visual Studio Code, give it a try, and let us know what you think. If you run into any issues, or have any suggestions, please report them in the Issues section of our GitHub repository. Set the C_CppProperties.UpdateChannel in your Visual Studio Code settings to “Insiders” to get early builds of our extension.

Please also take our quick survey to help us shape this extension to meet your needs. We can be reached via the comments below or via email (visualcpp@microsoft.com). You can also find our team on Twitter (@VisualC).

The post Visual Studio Code C/C++ extension: March 2019 Update appeared first on C++ Team Blog.

Edit and Delete Discussion Comments on the Work Item


With the Azure DevOps Sprint 149 Update, you’ll now be able to edit and delete your comments in a work item’s discussion in Azure Boards. This is a highly voted Developer Community Feature, so I wanted to show you how it works.

In Azure Boards, the work item form can be accessed from the work items hub, boards, backlogs, and queries. To submit a comment, you can simply enter some text and then press the “Save” button on the top right corner of the page.

As a part of this update, we’ve also updated the UI of our discussion experience to make it cleaner and easier to read. We’ve added bubbles around comments to make it clearer where individual comments start and end, and we’ve increased the size of the avatars to emphasize the discussion participants.

The experience

Simply hover over any comment that you own, and you will see two new buttons appear. The first is the pencil icon you can click to edit your comment, and the other is the overflow icon where you will find the delete comment functionality.

Edit your comment

If you click the pencil icon, you will enter in edit mode. Simply make your edits and press the “Update” button to save your edits.

Once you submit your changes, you will see an “(edited)” watermark next to your comment’s timestamp to indicate that an edit has been made.

Delete your comment

When you click the overflow menu, you will see the option to delete your comment. Once you click this, you will be prompted again to confirm that you want to delete this comment.

History

You will have a full trace of all the edited and deleted comments in the history tab on the work item form, as auditability is extremely important for many customers.

Feedback

We’re excited for you to try this new feature, and we want to hear your feedback in the Developer Community! If you have any thoughts on this new functionality, you can also reach out to me directly on Twitter at @jessiesomekh22.

The post Edit and Delete Discussion Comments on the Work Item appeared first on Azure DevOps Blog.

Azure DevOps Now Available in the UK


At the Microsoft Reactor in London this morning, Donovan Brown announced that customers can now create Azure DevOps organizations and choose that their data will be stored in the UK Azure geography.

Creating a UK hosted Azure DevOps organization

This adds to the existing data locations available, which include:

  • Australia
  • Brazil
  • Canada
  • East Asia
  • Europe
  • India
  • United States

All customer data such as source code, work items, and test results, as well as the geo-redundant mirrors and offsite backups, are maintained within the selected geography when possible. For more information on the data locations available and what data is stored in the local geography, see the Microsoft Trust Center.

New customers can create an Azure DevOps organization in the UK Azure geography today by selecting it in the drop-down. Existing Azure DevOps customers can contact Microsoft support or use our support bot to request a move of their Azure DevOps organization from another geography into the UK if they wish.

The post Azure DevOps Now Available in the UK appeared first on Azure DevOps Blog.


Analytics For Azure DevOps Services is Now Generally Available


Reporting has been an important capability for Azure DevOps customers who rely on Analytics to make data driven decisions.

Today, we’re excited to announce that the Analytics features listed below will be included in our Azure DevOps Services offering at no additional cost. Customers will start to see these changes rolled out to their accounts soon.

Analytics Features Generally Available For Azure DevOps Services

  • Analytics Widgets – configurable modules that display data on a dashboard and help you monitor the progress of your work.

  • In Product Experiences – Analytics powered experiences within Azure DevOps and outside a dashboard that surface data and insights.

    • Top Failing Test Report – get insights about top failing tests in your pipeline to improve pipeline reliability and reduce test debt.

We will continue to offer Power BI Integration through Analytics Views and direct access to our OData Endpoint in preview for all Azure DevOps Services customers. Look for more information about the pricing model for Power BI integration and OData by June 2019.

Current Azure DevOps Services customers who have the Analytics marketplace extension installed can continue to use Analytics as they did before and do not need to follow any additional steps to get Analytics. As such, we will be deprecating the Analytics marketplace extension for hosted customers.

Azure DevOps Server 2019

For Azure DevOps Server, Analytics will remain in preview as an installable extension on the local marketplace and will become generally available in the next major release.

The Azure DevOps Analytics offering is the future of reporting and we will continue to invest in new features driven by Analytics. To learn more about Analytics and the experiences it currently enables:

The post Analytics For Azure DevOps Services is Now Generally Available appeared first on Azure DevOps Blog.

Web and Azure Tool Updates in Visual Studio 2019


Hopefully by now you’ve seen that Visual Studio 2019 is now generally available. As you would expect, we’ve added improvements for web and Azure development. As a starting point, Visual Studio 2019 comes with a new experience for getting started with your code and we updated the experience for creating ASP.NET and ASP.NET Core projects to match:

If you are publishing your application to Azure, you can now configure Azure App Service to use Azure Storage and Azure SQL Database instances right from the publish profile summary page, without leaving Visual Studio. This means that for any existing web application running in App Service, you can add SQL and Storage; it is no longer limited to creation time only.

By clicking the “Add” button you get to select between Azure Storage and Azure SQL Database (more Azure services to be supported in the future):

and then you get to choose between using an existing instance of Azure Storage that you provisioned in the past or provisioning a new one right then and there:

When you configure your Azure App Service through the publish profile as demonstrated above, Visual Studio will update the Azure App Service application settings to include the connection strings you have configured (e.g. in this case azgist). It will also apply hidden tags to the instances in Azure about how they are configured to work together so that this information is not lost and can be re-discovered later by other instances of Visual Studio.
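
If your application reads that connection string through ASP.NET Core configuration, the code does not need to change between local development and App Service. Here is a minimal hedged sketch, where the “azgist” name mirrors the example above and the controller itself is hypothetical:

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;

public class HomeController : Controller
{
    private readonly string _connectionString;

    public HomeController(IConfiguration configuration)
    {
        // Locally this resolves from appsettings.json; in Azure, the publish
        // step above added the same named connection string to the App Service
        // application settings, so the lookup is unchanged.
        _connectionString = configuration.GetConnectionString("azgist");
    }

    public IActionResult Index()
        => Content(string.IsNullOrEmpty(_connectionString)
            ? "No connection string configured."
            : "Connection string found.");
}
```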

For a 30 minute overview of developing with Azure in Visual Studio, check out the session we gave as part of the launch:

Send us your feedback

As always, we welcome your feedback. Tell us what you like and what you don’t like, tell us which features you are missing and which parts of the workflow work or don’t work for you. You can do this by submitting issues to Developer Community or contacting us via Twitter.

The post Web and Azure Tool Updates in Visual Studio 2019 appeared first on ASP.NET Blog.

Announcing ML.NET 1.0 RC – Machine Learning for .NET



ML.NET is an open-source and cross-platform machine learning framework (Windows, Linux, macOS) for .NET developers. Using ML.NET, developers can leverage their existing tools and skill sets to develop and infuse custom AI into their applications by creating custom machine learning models for common scenarios like sentiment analysis, recommendation, image classification, and more!

Today we’re announcing the ML.NET 1.0 RC (Release Candidate, version 1.0.0-preview), which is the last preview release before we ship the final ML.NET 1.0 RTM in Q2 of calendar year 2019.

Soon we will complete the first main milestone of a great journey in the open that started in May 2018 when we released ML.NET 0.1 as open source. Since then we’ve been releasing monthly – 12 preview releases so far, as shown in the roadmap below:

In this release (ML.NET 1.0 RC) we have concluded our main API changes. For the next sprint we are focusing on improving documentation and samples and addressing major critical issues if needed.

The goal is to avoid any new breaking changes moving forward.

Updates in ML.NET 1.0 RC timeframe

  • Segregation of stable vs. preview versions of ML.NET packages: Heading into ML.NET 1.0, most of the functionality in ML.NET (around 95%) is going to be released as stable (version 1.0).

    You can review the reference list of the ‘stable’ packages and classes here.

    However, there are a few feature areas which still won’t be in an RTM state when ML.NET 1.0 is released. The features kept in preview are categorized as preview packages with version 0.12.0-preview.

    The main packages that will continue in preview state after ML.NET 1.0 is released are the following (0.12 version packages):

    • TensorFlow components
    • Onnx components
    • TimeSeries components
    • Recommendations components

    You can review the full reference list of “after 1.0” preview packages and classes (0.12.0-preview) here.

  • IDataView moved to the Microsoft.ML namespace: One change in this release is that we have moved IDataView back into the Microsoft.ML namespace based on feedback we received (see the code sketch after this list).

  • TensorFlow-support fixes: TensorFlow is an open source machine learning framework used for deep learning scenarios (such as computer vision and natural language processing). ML.NET has support for using TensorFlow models, but in ML.NET version 0.11 there were a few issues that have been fixed for the 1.0 RC release.

    You can review an example of ML.NET code running a TensorFlow model here.

  • Release Notes for ML.NET 1.0 RC: You can check out additional release notes for 1.0 RC here.
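
To ground the API surface described above, here is a minimal hedged sketch of training code against the 1.0 RC packages. The SentimentData type, column layout, and file paths are hypothetical, and the trainer choice is just one of many:

```csharp
using Microsoft.ML;        // IDataView now lives in this namespace (see above)
using Microsoft.ML.Data;

public class SentimentData
{
    [LoadColumn(0)] public bool Label { get; set; }
    [LoadColumn(1)] public string Text { get; set; }
}

public static class TrainingSketch
{
    public static void Main()
    {
        var mlContext = new MLContext(seed: 1);

        // Load a hypothetical tab-separated training file into an IDataView.
        IDataView trainingData = mlContext.Data.LoadFromTextFile<SentimentData>(
            "sentiment.tsv", hasHeader: true);

        // Featurize the text column and train a binary classifier.
        var pipeline = mlContext.Transforms.Text
            .FeaturizeText("Features", nameof(SentimentData.Text))
            .Append(mlContext.BinaryClassification.Trainers.SdcaLogisticRegression());

        ITransformer model = pipeline.Fit(trainingData);

        // Persist the trained model for later scoring.
        mlContext.Model.Save(model, trainingData.Schema, "sentiment-model.zip");
    }
}
```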

Breaking changes in ML.NET 1.0 Release Candidate

For your convenience, if you are moving your code from ML.NET v0.11 to v0.12, you can check out the breaking changes list that impacted our samples.

Planning to go to production?


If you are using ML.NET in your app and looking to go into production, you can talk to an engineer on the ML.NET team to:

  • Get help implementing ML.NET successfully in your application.
  • Provide feedback about ML.NET.
  • Demo your app and potentially have it featured on the ML.NET homepage, .NET Blog, or other Microsoft channel.

Fill out this form and leave your contact information at the end if you’d like someone from the ML.NET team to contact you.

Get ready for ML.NET 1.0 before it releases!


As mentioned, ML.NET 1.0 is almost here! You can get ready before it releases by researching the following resources:

Get started with ML.NET here.

Next, go further and explore some other resources:

We would appreciate your feedback – please file issues with any suggestions or enhancements in the ML.NET GitHub repo to help us shape ML.NET and make .NET a great platform of choice for machine learning.

Thanks and happy coding with ML.NET!

The ML.NET Team.

This blog was authored by Cesar de la Torre, with additional contributions from the ML.NET team.

The post Announcing ML.NET 1.0 RC – Machine Learning for .NET appeared first on .NET Blog.

Top Stories from the Microsoft DevOps Community – 2019.04.05


The big news this week is the launch of Visual Studio 2019. If you weren’t able to watch the video keynote live, don’t worry. It was all recorded for you so you can watch it on-demand. And don’t forget my favorite part: all the projects that you can build with Visual Studio 2019? You can also set up continuous integration builds for them with Azure Pipelines.

Of course, that’s not the only news this week. Here’s what’s happening in the community:

Pure Containerized Deploy with Terraform on Azure DevOps
I’m getting more and more excited about containerized deployments. It just makes so many of the hard parts of deployment easier that it’s becoming my go-to when setting up a new pipeline. Jason Farrell shows how to integrate Azure Pipelines and Terraform to create and then deploy a container with Infrastructure as Code (IaC).

Azure DevOps Podcast: Ted Neward on the ‘Ops’ Side of DevOps
DevOps isn’t just about pipelines or automation; it’s about delivering value to our customers. And one of the most important ways we do that is to ensure that we effectively operate the software that we develop. Ted Neward has some great insights into how operations works within the DevOps movement, how development and operations teams should work together, and where the industry is headed.

Build and Deploy Asp.Net App with Azure DevOps
It’s pretty easy to set up a build and release pipeline for a greenfield ASP.net application. But what about existing applications that you don’t want to refactor? What if you have configuration checked in to version control? What if you have a web.config with variables that you need to transform? Ricci Gian Maria introduces some techniques for managing existing applications.

The DevOps Lab: Using GitHub Actions to Deploy to Azure
GitHub Actions are pretty interesting; they let you bring a container to help automate parts of your GitHub workflow. You can use them to update Azure Boards work items or start an Azure Pipelines build or deployment. But what if you have a simple static website to deploy? Gopi Chigakkagari shows how you might use GitHub Actions to deploy right to Azure.

As always, if you’ve written an article about Azure DevOps or find some great content about DevOps on Azure then let me know! I’m @ethomson on Twitter.

The post Top Stories from the Microsoft DevOps Community – 2019.04.05 appeared first on Azure DevOps Blog.

Expanding Azure IoT certification service to support Azure IoT Edge devices


In December 2018, Microsoft launched the Azure IoT certification service, a web-based test automation workflow to streamline the certification process through self-serve tools. Azure IoT certification service (AICS) was designed to reduce the operational processes and engineering costs for hardware manufacturers to get their devices certified for Azure Certified for IoT program and be showcased on the Azure IoT device catalog.

The initial version of AICS focused on IoT device certification. Today, we are taking steps to expand the service to also support Azure IoT Edge device certification. An Azure IoT Edge device is a device comprised of three key components: IoT Edge modules, the IoT Edge runtime, and a cloud-based interface. Learn more about these three components in this blog explaining IoT Edge.

Certifying a device as an Azure IoT Edge device means the certification program validates the functionality of the three key components described above. The certification program also ensures that the identity of a device is protected through validation of security components. You can review the specific technical requirements for Azure IoT Edge device certification.

This expansion of AICS capabilities builds on the related expansion of the Azure Certified for IoT program to support Azure IoT Edge devices, which was announced in June 2018. Since then, the certified Azure IoT Edge device ecosystem has grown rapidly, with additional operating system support such as Windows IoT and an Edge module ecosystem that allows any partner to build containerized apps and deploy modules to a range of Azure IoT Edge devices. You can also see all the certified Azure IoT Edge devices here.

With the web-based test workflow now updated to also certify Edge devices, AICS not only helps improve the overall quality of IoT deployments but also simplifies the certification processes for device manufacturers. From now on, all device manufacturers are required to run AICS to complete the certification process. To learn more about AICS and see a demo of it in action, please refer to this episode of the IoT Show on Channel 9.

Ecosystem partners have endorsed this strategy and approach as well.  One partner who recently used the tool provided this comment:

“Azure IoT certification service (AICS) simplifies the validation process for Azure IoT Edge device certification and increases our quality with consistency for Azure IoT Edge devices.”

–Tomoyasu Suzuki, President of Plat'Home Co., Ltd

Azure IoT Edge flow within AICS

The workflow and user experience for Azure IoT Edge devices are similar to the IoT device certification workflow. You will need to select the device’s OS, prepare your device to register with the specified IoT Hub instances, and then start the test run. Step-by-step instructions are provided in the certification documentation.

There are three key differences between the Azure IoT Edge flow and the IoT device certification flow:

  1. The “Edge certified” checkbox needs to be checked to invoke the AICS workflow for Azure IoT Edge. To start AICS for Azure IoT Edge devices, you first need to select the “Edge Certified” checkbox under the Azure IoT Edge section on the first page of the device registration process when submitting a device for certification.

  2. The automated tests are different. The AICS workflow for IoT devices validates IoT Hub primitives such as device-to-cloud, cloud-to-device, direct methods, and device twin properties. The AICS workflow for IoT Edge devices validates the presence of the EdgeAgent module on the device and also tests that a sample Edge module is successfully deployed to the device. To learn more about this process, please see our blog on streamlined IoT device certification.

  3. Upon submission, AICS will notify the Microsoft team, who will follow up with you and provide guidance on packaging and shipping the physical device to Microsoft. This step is not necessary for the IoT device certification process. The confirmation dialog is shown below.

Confirmation Dialog Box from AICS

AICS makes the certification process easy and intuitive. We hope every device manufacturer will submit their devices for certification.

Next steps

Go to the Partner Dashboard to start your submission.

If you have any questions, please contact Azure Certified for IoT.

          Hybrid storage performance comes to Azure


          When it comes to adding a performance tier between compute and file storage, Avere Systems has led the way with its high-performance caching appliance known as the Avere FXT Edge Filer. This week at NAB, attendees will get a first look at the new Azure FXT Edge Filer, now with even more performance, memory, SSD, and support for Azure Blob. Since Microsoft’s acquisition of Avere last March, we’ve been working to provide an exciting combination of performance and efficiency to support hybrid storage architectures with the Avere appliance technology.

          Linux performance over NFS

          Microsoft is committed to meeting our customers where we’re needed. The launch of the new Azure FXT Edge Filer is yet another example of this as we deliver high-throughput and low-latency NFS to applications running on Linux compute farms. The Azure FXT Edge Filer solves latency issues between Blob storage and on-premises computing with built-in translation from NFS to Blob. It sits at the edge of your hybrid storage environment closest to on-premises compute, caching the active data to reduce bottlenecks. Let’s look at common applications:

          • Active Archives in Azure Blob – When Azure Blob is a target storage location for aging, but not yet cold data, the Azure FXT Edge Filer accelerates access to files by creating an on-premises cache of active data.


          • WAN Caching – Latency across wide area networks (WANs) can slow productivity. The Azure FXT Edge Filer caches active data closest to the users and hides that latency as they reach for data stored in data centers or colos. Remote office engineers, artists, and other power users achieve fast access to files they need, and meanwhile backup, mirroring, and other data protection activities run seamlessly in the core data center.


          • NAS Optimization – Many high-performance computing environments have large NetApp or Dell EMC Isilon network-attached storage (NAS) arrays. When demand is at its peak, these storage systems can become bottlenecks. The Azure FXT Edge Filer optimizes these NAS systems by caching data closest to the compute, separating performance from capacity and better delivering both.


          When datasets are large, hybrid file-storage caching provides performance and flexibility that are needed to keep core operations productive.

          Azure FXT Edge Filer model specifications

          We are currently previewing the FXT 6600 model at customer sites, with a second FXT 6400 model becoming available with general availability. The FXT 6600 is an impressive top-end model with 40 percent more read performance and double the memory of the FXT 5850. The FXT 6400 is a great mid-range model for customers who don’t need as much memory and SSD capacity or are looking to upgrade FXT 5600 and FXT 5400 models at an affordable price.


Azure FXT Edge Filer – 6600 Model (highest performance, largest cache), specifications per node:

  • 1536 GB DRAM
  • 25.6 TB SSD
  • 6x25/10Gb + 2x1Gb network ports
  • Minimum 3-node cluster
  • 256-bit AES encryption

Azure FXT Edge Filer – 6400 Model (high performance, large cache), specifications per node:

  • 768 GB DRAM
  • 12.8 TB SSD
  • 6x25/10Gb + 2x1Gb network ports
  • Minimum 3-node cluster
  • 256-bit AES encryption

          Key features

          • Scalable to 24 FXT server nodes as demand grows
          • High-performance DRAM/memory for faster access to active data and large SSD cache sizes to support big data workloads
          • Single mountpoint provides simplified management across heterogeneous storage
          • Hybrid architecture – NFSv3, SMB2 to clients and applications; support for NetApp, Dell EMC Isilon, Azure Blob, and S3 storage

The Azure FXT Edge Filer is a combination of hardware provided by Dell EMC and software provided by Microsoft. For ease, a complete solution will be delivered to customers as a software-plus-hardware appliance through a system integrator. If you are interested in learning more about adding the Azure FXT Edge Filer to your on-premises infrastructure or about upgrading existing Avere hardware, you can reach out to the team now. Otherwise, watch for updates on the Azure FXT Edge Filer homepage.

          Azure FXT Edge Filer for render farms

          High-performance file access for render farms and artists is key to meeting important deadlines and building efficiencies into post-production pipelines. At NAB 2019 in Las Vegas, visit the Microsoft Azure booth #SL6716 to learn more about the new Azure FXT Edge Filer for rendering. You’ll find technology experts, presentations, and support materials to help you render faster with Azure.

          Resources

          Leveraging AI and digital twins to transform manufacturing with Sight Machine


          In the world of manufacturing, the Industrial Internet of Things (IIoT) has come, and that means data. A lot of data. Smart machines, equipped with sensors, add to the large quantity of data already generated from quality systems, MES, ERP and other production systems. All this data is being gathered in different formats and at different cadences making it nearly impossible to use—or to deliver business insights. Azure has mastered ingesting and storing manufacturing data with services such as Azure IoT Hub and Azure Data Lake, and now our partner Sight Machine has solved for the other huge challenge: data variety. Sight Machine on Azure is a leading AI-enabled analytics platform that enables manufacturers to normalize and contextualize plant floor data in real-time. The creation of these digital twins allows them to find new insights, transform operations, and unlock new value.

          Data in the pre-digital world

          Manufacturers are aware of the untapped potential of production data. Global manufacturers have begun investing in on-premises solutions for capturing and storing factory floor data. But these pre-digital world methods have many disadvantages. They result in siloed data, uncontextualized data (raw machine data with no connection to actual production processes), and limited accessibility (engineers and specialists are required to access and manipulate the data). Most importantly, this data is only accessed in a reactive manner: it does not reflect real-time conditions. It can’t be used to address quality and productivity issues as they occur, or to predict conditions that might impact output.

          Cloud-based manufacturing intelligence

Sight Machine’s Digital Manufacturing Platform—built on Azure—harnesses artificial intelligence, machine learning, and advanced analytics. It can continuously ingest and transform enormous quantities of production data into actionable insight, such as identifying vulnerabilities in quality and productivity throughout the enterprise. The approach is illustrated in the graphic below.

          Infographic showcasing process for data ingestion, storage and modeling, and analysis in Manufacturing IIoT visibility and analytics

Sight Machine’s platform leverages the IoT capabilities of Azure to ingest data from machines (PLCs and machine data). Azure IoT Hub and Azure Stream Analytics process the data in real time and store it in Azure Blob Storage. Sight Machine’s AI Data Pipeline dynamically integrates this data with other production sources. These sources can include ERP data from Dynamics AX, as well as analyses generated by Azure Machine Learning service and HDInsight and stored in Azure Data Lake. By combining all this data, Sight Machine creates a digital twin of the entire production process. Their analytics and visualization tools leverage this digital twin to deliver real-time information to the user. Integration with Azure Active Directory ensures the right engineers can access the right data and analysis tools.
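
As an illustration of the device-to-cloud ingestion step described above, here is a minimal hedged sketch of a plant-floor sensor sending telemetry to Azure IoT Hub with the .NET device SDK. The connection string, device id, and payload fields are hypothetical:

```csharp
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;
using Newtonsoft.Json;

internal static class MachineTelemetrySender
{
    public static async Task Main()
    {
        // Hypothetical device connection string issued by the IoT Hub.
        var deviceClient = DeviceClient.CreateFromConnectionString(
            "HostName=contoso-hub.azure-devices.net;DeviceId=press-42;SharedAccessKey=<key>",
            TransportType.Mqtt);

        // A hypothetical reading from a PLC on the plant floor.
        var reading = new { machineId = "press-42", temperature = 78.4, cycleTimeSeconds = 12.7 };
        var message = new Message(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(reading)));

        // IoT Hub receives the event; Stream Analytics and the downstream
        // pipeline described above can then process and store it.
        await deviceClient.SendEventAsync(message);
    }
}
```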

          Digital twins = one source of truth

          Somewhat contrary to the notion of “twins,” digital twins result in one source of truth—at least in the world of data. The idea is simple: take data from disparate sources and locations—then combine the information in the cloud into digital representations of every machine, line, part, and process. Once a digital twin has been created, it can be stored, managed, analyzed, and presented.

          Sight Machine creates digital twins that represent every manufacturing machine, line, facility, supplier, part, batch, and process. Sight Machine’s AI Data Pipeline automates the process of blending and transforming streaming data into fundamental units of analysis, purpose-built for manufacturing. This approach combines edge compute, cloud automation, and management with AI. The benefits include classifying, mapping, data transformation, and unified data models that are configurable for every manufacturing environment.

          Recommended next steps

          To learn more about the company, go to the Sight Machine website. To try out the service, go to the Azure Marketplace listing and click Contact me.


          Introducing the App Service Migration Assistant for ASP.NET applications


This blog post was co-authored by Nitasha Verma, Principal Group Engineering Manager, Azure App Service.

In June 2018, we released the App Service Migration Assessment Tool. The Assessment Tool was designed to help customers quickly and easily assess whether a site could be moved to Azure App Service by scanning an externally accessible (HTTP) endpoint. Today we’re pleased to announce the release of an updated version, the App Service Migration Assistant! The new version helps customers and partners move sites identified by the assessment tool by quickly and easily migrating ASP.NET sites to App Service.

          The App Service Migration Assistant is designed to simplify your journey to the cloud through a free, simple, and fast solution to migrate ASP.Net applications from on-premises to the cloud. You can quickly:

          • Assess whether your app is a good candidate for migration by running a scan of its public URL.
          • Download the Migration Assistant to begin your migration.
          • Use the tool to run readiness checks and general assessment of your app’s configuration settings, then migrate your app or site to Azure App Service via the tool.

          Keep reading to learn more about the tool or start your migration now.​

          App Service Migration Tool landing page

          Getting started

Download the App Service Migration Assistant. This tool works with ASP.NET sites hosted on IIS version 7.0 and above and will migrate site content and configuration to Azure App Service in your Azure subscription, using either a new or existing App Service plan.

          How the tool works

          The Migration Assistant tool is a local agent that performs a detailed assessment and then walks you through the migration process. The tool performs readiness checks as well as a general assessment of the web app’s configuration settings.

          Sample Assessment Report for website.

          Once the application has received a successful assessment, the tool will walk you through the process of authenticating with your Azure subscription and then prompt you to provide details on the target account and App Service plan along with other configuration details for the newly migrated site.

          Azure options

          The Migration Assistant tool will then move your site to the target App Service plan while also configuring Hybrid Connections, should that option be selected.

          Database migration and Hybrid Connections

          Our Migration Assistant is designed to migrate the web application and associated configurations, but it does not migrate the database. There are two options for your database:

          1. Use the SQL Migration Tool
          2. Leave your database on-premises and connect to it from the cloud using Hybrid Connections

          When used with App Service, Hybrid Connections allows you to securely access application resources in other networks – in this case an on-premises SQL database. The migration tool configures and sets up Hybrid Connections for you, allowing you to migrate your site while keeping your database on-premises to be migrated at your leisure.
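
Because Hybrid Connections operate at the host-and-port level, the migrated application’s data access code typically does not need to change. Here is a minimal hedged sketch, where the server name, database, and table are hypothetical:

```csharp
using System.Data.SqlClient;

internal static class OrdersRepository
{
    public static int CountOrders()
    {
        // The same on-premises host name is used after migration; App Service
        // routes traffic to "onprem-sql01:1433" through the configured
        // Hybrid Connection instead of the local network.
        const string connectionString =
            "Server=onprem-sql01,1433;Database=Orders;User Id=appUser;Password=<secret>;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}
```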

          Supported configurations

          The tool should migrate most modern ASP.Net applications, but there are some configurations that are not supported. These include:

          • IIS version less than 7.0
          • Dependence on ISAPI filters
          • Dependence on ISAPI extensions
          • Bindings that are not HTTP or HTTPS
          • Endpoints that are not port 80 for HTTP, or port 443 for HTTPS
          • Authentication schemes other than anonymous
          • Dependencies on applicationhost.config settings made with a location tag
          • Applications that use more than one application pool
          • Use of an application pool that uses a custom account
          • URL Rewrite rules that depend on global settings
          • Web farms – specifically shared configuration

You can find more details on what the tool supports, as well as workarounds for some unsupported sites, on the documentation page.

          You can also find more details on App Service migrations on the App Service Migration checklist.

          What’s next

We plan to continue adding functionality to the tool in the coming months, with the most immediate priority being additional ASP.NET scenarios and support for additional web frameworks, such as Java and PHP.

          If you have any feedback on the tool or would like to suggest improvements, please submit your feature requests on our GitHub page.

          Azure Security Center exposes crypto miner campaign


Azure Security Center discovered a new cryptocurrency mining operation on Azure customer resources. This operation takes advantage of an old version of a known open source CMS with a known RCE vulnerability (CVE-2018-7600) as the entry point. After using the CRON utility for persistence, it mines the “Monero” cryptocurrency using a newly compiled binary of the “XMRig” open-source crypto mining tool.

          Azure Security Center (ASC) spotted the attack in real-time, and alerted the affected customer with the following alerts:

          • Suspicious file download – Possible malicious file download using wget detected
          • Suspicious CRON job – Possible suspicious scheduling tasks access detected
          • Suspicious activity – ASC detected periodic file downloads and execution from the suspicious source
          • Process executed from suspicious location

          Azure Security Center alert on a file downloaded and executed.

          The entry point

Following the traces the attacker left behind, we were able to track the entry point of this malware and conclude that it originated from a remote code execution vulnerability in a known open source CMS - CVE-2018-7600.

          This vulnerability is exposed in an older version of this CMS and is estimated to impact a large number of websites that are using out of date versions. The cause of this vulnerability is insufficient input validation within an API call.

The first suspicious command line we noticed on the affected Linux machines was:

          Base64 encoded bash command line (details censored).

Decoding the base64 part of the command line reveals logic that periodically downloads and executes a bash script file using the CRON utility:

          Base64 decoded bash command line (details censored) – wget | sh.

The URL path also includes a reference to the CMS name – another indication of the entry point (and of a sloppy attacker as well).

We also learned, from the telemetry collected from the affected machines, that this first command line executes within the “apache” user context and within the relevant CMS working directory.

We examined the affected resources and discovered that all of them were running an unpatched version of the relevant CMS, which exposes them to a highly critical security risk that allows an attacker to run malicious code on the exposed resource.

          Malware analysis

The malware uses the CRON utility (the Unix job scheduler) for persistence by adding the following line to the CRON table file:

          Cron command running wget | sh.

This results in the download and execution of a bash script file every minute and gives the attacker command and control over the machine using bash scripts.

          The malicious bash script file (details censored).

The bash file (as we captured it at this time) downloads the binary file and executes it (as seen in the image above). The binary checks whether the machine is already compromised and then downloads, using the HTTP 1.1 POST method, one binary file or another depending on the number of processors the machine has.

          Malicious network traffic sniff.

          At first sight, the second binary seems to be more difficult to investigate since it's clearly obfuscated. Luckily, the attacker chose to use the UPX packer, which focuses on compression rather than obfuscation.

          Malicious binary packed with the UPX packer.

          After unpacking the binary, we found a build of the open-source cryptocurrency miner "XMRig", version 2.6.3. The miner was compiled with its configuration embedded inside it and pulls mining jobs from the attacker's mining proxy server, so we were unable to estimate the number of clients or the attacker's earnings.

          XMRig assembly code.

          The big picture

          By analyzing the behavior of several crypto miners, we have noticed two strong indicators of crypto miner driven attacks:

          1. Killing competitors – Many crypto-mining attacks assume that the machine is already compromised and try to kill other miners competing for compute power. They do this by observing the process list (a minimal illustration of these heuristics appears after this list), focusing on:

          • Process name – from popular open-source miners to lesser-known mining campaigns
          • Command line arguments, such as known pool domains, crypto hash algorithms, and mining protocols
          • CPU usage

          Another common method we identified is resetting the CRON tab, which in many cases is used as a persistence mechanism by other compute-power competitors.

          2. Mining pools – Crypto mining jobs are managed by the mining pool, which is responsible for gathering multiple clients to contribute and for sharing the revenue across those clients. Most attackers use public mining pools, which are simple to deploy and use, but once the attacker is exposed, their account might be blocked. Lately we have noticed an increasing number of cases where attackers used their own proxy mining server. This technique helps the attacker stay anonymous, both from detection by a security product within the host (such as Azure Security Center threat detection for Linux) and from detection by the public mining pool.
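
          To make the process-list indicators above concrete, here is a minimal, hypothetical Python sketch of that kind of scan. It is not the attacker's code and not an Azure Security Center detection rule; the indicator strings, the CPU threshold, and the use of the psutil library are assumptions chosen purely for illustration.

          import psutil

          MINER_NAMES = {"xmrig", "minerd", "cpuminer"}  # assumed indicator names, for illustration only
          MINER_ARGS = ("stratum+tcp", "cryptonight", "--donate-level", "pool.")  # assumed pool/protocol hints
          CPU_THRESHOLD = 80.0  # assumed sustained CPU percentage

          def suspicious_processes():
              # Flag processes whose name, command line, or CPU usage matches miner indicators.
              hits = []
              for proc in psutil.process_iter(attrs=["pid", "name", "cmdline", "cpu_percent"]):
                  name = (proc.info["name"] or "").lower()
                  cmdline = " ".join(proc.info["cmdline"] or []).lower()
                  cpu = proc.info["cpu_percent"] or 0.0  # meaningful only after a prior sample
                  if (name in MINER_NAMES
                          or any(hint in cmdline for hint in MINER_ARGS)
                          or cpu > CPU_THRESHOLD):
                      hits.append((proc.info["pid"], name, cmdline[:80]))
              return hits

          for pid, name, cmd in suspicious_processes():
              print(f"possible miner: pid={pid} name={name} cmd={cmd}")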

          Conclusion and prevention

          Preventing this attack is as easy as installing the latest security updates. A preferred option might be to use Software as a Service (SaaS) offerings instead of maintaining a full web server and software environment yourself.

          Crypto-miner activity is usually easy to detect since it consumes significant resources.
          A cloud security solution such as Azure Security Center continuously monitors the security of your machines, networks, and Azure services and alerts you when unusual activity is detected.

          Azure AI does that?


          Five examples of how Azure AI is driving innovation

          Azure AI image

          Whether you're just starting out in tech, building, managing, or deploying apps, gathering and analyzing data, or solving global issues, anyone can benefit from using cloud technology. Below we've gathered five cool examples of innovative artificial intelligence (AI) to showcase how you can be a catalyst for real change.

          Facial recognition

          You know that old box of photos you have sitting in the attic collecting cobwebs; the one with those beautifully embarrassing childhood photos half-covered by a misplaced thumb? How grateful would your family be if you could bring those back to life digitally, at your fingertips? Manually scanning and downloading photos to all your devices would be a huge pain. And if those photos don't have dates or the names of the people in them written on the back, forget it! But with AI algorithms, cognitive services, and facial recognition processes, organizing these photos into groups is super simple.

          By utilizing Azure’s Face API, facial recognition algorithms can quickly and accurately detect, verify, identify, and analyze faces. They can provide facial matching, facial attributes, and characteristic analysis in order to organize people and facial definitions into groups of similar faces.
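
          As a rough illustration of how such a flow might look, here is a minimal Python sketch against the Face API REST endpoints. The endpoint, key, and image URL are placeholders, and the grouping step assumes the faces have already been detected.

          import requests

          ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
          KEY = "<your-face-api-key>"                                       # placeholder

          def detect_faces(image_url):
              # Detect faces in an image and return face IDs plus basic attributes.
              resp = requests.post(
                  f"{ENDPOINT}/face/v1.0/detect",
                  params={"returnFaceId": "true", "returnFaceAttributes": "age,emotion"},
                  headers={"Ocp-Apim-Subscription-Key": KEY},
                  json={"url": image_url},
              )
              resp.raise_for_status()
              return resp.json()

          def group_similar_faces(face_ids):
              # Group previously detected face IDs into sets of similar faces.
              resp = requests.post(
                  f"{ENDPOINT}/face/v1.0/group",
                  headers={"Ocp-Apim-Subscription-Key": KEY},
                  json={"faceIds": face_ids},
              )
              resp.raise_for_status()
              return resp.json()  # {"groups": [[...face ids...]], "messyGroup": [...]}

          faces = detect_faces("https://example.com/old-family-photo.jpg")  # hypothetical image
          groups = group_similar_faces([f["faceId"] for f in faces])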

          Handwriting analysis

          Already spent hours manually sorting through those old photos? Not to worry: another helpful tool in the Computer Vision API is the ability to take the papers and handwritten notes you've compiled throughout your last project and create a cohesive document. No longer will you need to decipher those scribbles from your teammates and scratch your head wondering whether that obscure symbol is a four or a "u."

          With Computer Vision API’s Recognizing Handwritten Text interface, you can conveniently take photos of handwritten notes, forms, whiteboards, sticky notes, that napkin you found, and anything in between. Rather than manually transcribing them, you can turn these documents into digital notes that are easy to comb through with a simple search. The interface can detect, extract, and digitally reproduce any type of handwriting—even Medieval Klingon! Imagine all the time and paper you will save!
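
          For a sense of what calling this interface could look like, here is a minimal Python sketch against the Computer Vision recognizeText REST operation. The endpoint, key, and image URL are placeholders, and the submit-then-poll pattern reflects the asynchronous nature of the operation.

          import time
          import requests

          ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
          KEY = "<your-computer-vision-key>"                                # placeholder

          def recognize_handwriting(image_url):
              # Submit an image of handwritten notes, then poll for the transcribed text.
              submit = requests.post(
                  f"{ENDPOINT}/vision/v2.0/recognizeText",
                  params={"mode": "Handwritten"},
                  headers={"Ocp-Apim-Subscription-Key": KEY},
                  json={"url": image_url},
              )
              submit.raise_for_status()
              operation_url = submit.headers["Operation-Location"]  # URL of the async operation

              while True:
                  result = requests.get(
                      operation_url, headers={"Ocp-Apim-Subscription-Key": KEY}
                  ).json()
                  if result["status"] in ("Succeeded", "Failed"):
                      break
                  time.sleep(1)

              lines = result.get("recognitionResult", {}).get("lines", [])
              return [line["text"] for line in lines]

          print(recognize_handwriting("https://example.com/whiteboard-notes.jpg"))  # hypothetical image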

          Text analysis

          A close cousin of the Handwriting API, the Text Analytics API allows for some pretty neat text analysis as well. Search through hundreds of documents, comb through customer reviews, tweets, and comments, and automatically identify positive or negative sentiment in posts by inputting just a few parameters. The API can also detect up to 120 different languages and identify things like whether "times" refers to The New York Times or Times Square. Pretty cool, right?
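
          Here is a minimal, hypothetical Python sketch of a sentiment call against the Text Analytics v2.1 REST endpoint; the endpoint and key are placeholders. Scores near 1.0 indicate positive sentiment and scores near 0.0 indicate negative sentiment.

          import requests

          ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
          KEY = "<your-text-analytics-key>"                                 # placeholder

          def sentiment(texts):
              # Score a batch of documents; each gets a sentiment score between 0.0 and 1.0.
              documents = {"documents": [
                  {"id": str(i), "language": "en", "text": t} for i, t in enumerate(texts)
              ]}
              resp = requests.post(
                  f"{ENDPOINT}/text/analytics/v2.1/sentiment",
                  headers={"Ocp-Apim-Subscription-Key": KEY},
                  json=documents,
              )
              resp.raise_for_status()
              return {d["id"]: d["score"] for d in resp.json()["documents"]}

          print(sentiment(["Loved the new release!", "The checkout page keeps timing out."]))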

          Translate languages

          Speaking of detecting different languages, the Translator Text API allows you to communicate with your colleagues from all over the map better than ever before. Start typing "Hello, it's nice to meet you" into your app and the API can translate your and your colleagues' entire conversation.

          The Translator Text API can show text in different alphabets, translate Chinese characters to PinYin, display any of the supported transliteration languages in the Latin alphabet, and even show words written in the Latin alphabet in non-Latin scripts such as Japanese, Hindi, or Arabic, all with some simple code. The API can be integrated into your apps, websites, tools, and solutions and allows you to add multi-language user experiences in more than 60 languages. Companies around the world, like eBay, use this API for website localization, e-commerce, customer support, messaging applications, bots, and more to provide quick and automatic translations for all their customers worldwide.
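
          A minimal Python sketch of such a call against the Translator Text v3 REST API might look like the following; the key and region are placeholders, and the target languages are chosen only for illustration.

          import requests

          KEY = "<your-translator-key>"       # placeholder
          REGION = "<your-resource-region>"   # placeholder, e.g. "westus2"

          def translate(text, to_languages):
              # Translate a single string into one or more target languages.
              resp = requests.post(
                  "https://api.cognitive.microsofttranslator.com/translate",
                  params={"api-version": "3.0", "to": to_languages},
                  headers={
                      "Ocp-Apim-Subscription-Key": KEY,
                      "Ocp-Apim-Subscription-Region": REGION,
                      "Content-Type": "application/json",
                  },
                  json=[{"Text": text}],
              )
              resp.raise_for_status()
              return {t["to"]: t["text"] for t in resp.json()[0]["translations"]}

          print(translate("Hello, it's nice to meet you", ["ja", "hi", "ar"]))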

          Translator Text can also translate languages in real time through video/audio input so you can seamlessly communicate with colleagues around the world via video chat. It even converts video to written text, which makes content accessible for those who are hearing- or visually impaired.

          AI for Good

          While all these services are great for automating business and personal projects, they can be used for much more. Last fall, Microsoft announced AI for Humanitarian Action: a new $40 million, five-year program that uses the power of AI to help the world recover from disasters, address the needs of children, protect refugees and displaced people, and promote respect for human rights. Part of this initiative is the AI for Good Suite, a five-year commitment to solve society’s biggest challenges using AI fundamentals.

          One of those challenges is being addressed by long-time Microsoft partner Operation Smile, a nonprofit dedicated to repairing cleft lips and palates across the globe. Through the use of machine vision AI and facial modeling, surgeons can compare pre- and post-surgery outcomes, rank the best repairs, and provide that data back to Operation Smile. From there, the organization can identify their top-performing surgeons and enable them to teach others how to improve their cleft repair techniques through videos that can be accessed around the globe.

          Operation Smile is supercharging their doctors’ talents with technology to increase quality of life throughout the world. By utilizing AI, Operation Smile can help more children than ever before!

          With AI, the sky is the limit. And who knows—you just might discover the next best innovation in AI technology.

          Learn more

          Learn more about what you can do with Cognitive Services

          Get certified as an Azure AI Engineer

          Azure Stack IaaS – part seven


          It takes a team

          Most apps get delivered by a team. When your team delivers the app through virtual machines (VMs), it is important to coordinate efforts. Born in the cloud to serve teams from all over the world, Azure and Azure Stack have some handy capabilities to help you coordinate VM operations across your team.

          Identity and single sign-on

          The easiest identity to remember is the one you use every day to sign in to your corporate network and check your email. If you are using Azure Active Directory or your own Active Directory, your login to Azure Stack will be the same. Your admin sets this up when Azure Stack is deployed, so you don't have to learn and remember different credentials.

          Learn more about integrating Azure Stack with Azure Active Directory and Active Directory Federation Services (ADFS).

          Role-based access control

          In the virtualization days, my team typically coordinated operations through credentials to VMs and the management tools. Azure Resource Manager includes a robust role-based access control (RBAC) system that not only lets you identify who can access the system, but also lets you assign people to roles and set a scope of control to define which actions they can perform on which resources.

          Role-based access control in Azure and Azure Stack

          More than just people in my organization

          When you work in the cloud, you may need to collaborate with people from other organizations. As more and more things become automated, you might have to give a process, not a person, access to a resource. Azure and Azure Stack have you covered. The image below shows a VM where I have given access to three applications (service principals) and to a user from an external domain (foreign principal).

          A virtual machine where access was given to both three applications (service principals) and a user from an external domain (foreign principal).

          Service principal

          When an application needs access to deploy or configure VMs, or other resources in your Azure Stack, you can create a service principal, which is a credential for the application. You can then delegate only the necessary permissions to that service principal.

          As an example, you may have a configuration management tool that inventories VMs in your subscription. In this scenario, you can create a service principal, grant the reader role to that service principal, and limit the configuration management tool to read-only access.
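
          As an illustrative sketch only, the snippet below shows how such a read-only tool might authenticate with a service principal and inventory VMs using the Azure Python SDK. The IDs are placeholders, it assumes the Reader role has already been granted, and it targets public Azure endpoints (an Azure Stack deployment would point the client at its own management endpoint).

          from azure.identity import ClientSecretCredential
          from azure.mgmt.compute import ComputeManagementClient

          credential = ClientSecretCredential(
              tenant_id="<tenant-id>",                    # placeholder
              client_id="<service-principal-app-id>",     # placeholder
              client_secret="<service-principal-secret>", # placeholder
          )

          compute = ComputeManagementClient(credential, subscription_id="<subscription-id>")

          # With only the Reader role, the tool can inventory VMs but cannot modify them.
          for vm in compute.virtual_machines.list_all():
              print(vm.name, vm.location)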

          Learn more about service principals in Azure Stack.

          Foreign principal

          A foreign principal is the identity of a person that is managed by another authority. For example, the team at Contoso.com might need to allow access to a VM for a contractor or a partner from Fabrikam.com. In the virtualization days we would create a user account in our domain for that user, but that was a management headache. With Azure and Azure Stack you can allow users that sign in with their corporate credentials to access your VMs.

          Learn how to enable multi-tenancy in Azure Stack.

          Activity logs

          When your VM runs around the clock, you will have team members working on it at all hours. Fortunately, Azure and Azure Stack include an activity log that lets you track all changes made to the VM and who initiated each action.

          Activity log in Azure and Azure Stack

          Learn more about Azure Activity Logs.

          Locks

          Sometimes people make errors, like deleting a production VM by mistake. A nice feature you will find in Azure and Azure Stack is the "lock." A lock prevents changes to or deletion of a VM or any other resource. Anyone who attempts a locked operation gets an error message until the lock is manually removed.

          Locks in Azure Stack

          Learn more about locking VMs and other Azure resources.

          Tags

          The best place to store additional data about your VM is in the tool you manage the VM from. Azure and Azure Stack provide that ability through the Tags feature. You can use tags to help your team keep track of the deployment environment, support contacts, cost center, or anything else important. You can even search for these tags in the portal to find the right resources quickly.

          Tags are name/value pairs that enable you to categorize resources and view consolidated billing.

          Learn more about tagging VMs and other Azure resources.

          Work as a team, not individuals

          The team features in Azure and Azure Stack allow your team to elevate its game and deliver the best virtual machine operations. Managing an Infrastructure-as-a-Service (IaaS) VM is more than stop, start, and login. The Azure platform powering Azure Stack IaaS allows you to organize, delegate, and track your team's operations so you can deliver a better experience to your users.

          In this blog series

          We hope you come back to read future posts in this blog series. Here are some of our past and upcoming topics:

          How Skype modernized its backend infrastructure using Azure Cosmos DB – Part 1


          This is a three-part blog post series about how organizations are using Azure Cosmos DB to meet real world needs, and the difference it’s making to them. In this post (part 1 of 3), we explore the challenges Skype faced that led them to take action. In part 2, we’ll examine how Skype implemented Azure Cosmos DB to modernize its backend infrastructure. In part 3, we’ll cover the outcomes resulting from those efforts.

          Note: Comments in italics/parentheses are the author's.

          Scaling to four billion users isn’t easy

          Founded in 2003, Skype has grown to become one of the world’s premier communication services, making it simple to share experiences with others wherever they are. Since its acquisition by Microsoft in 2010, Skype has grown to more than four billion total users, more than 300 million monthly active users, and more than 40 million concurrent users.

          People Core Service (PCS), one of the core internal Skype services, is where contacts, groups, and relationships are stored for each Skype user. The service is called when the Skype client launches, is checked for permissions when initiating a conversation, and is updated as the user’s contacts, groups, and relationships are added or otherwise changed. PCS is also used by other, external systems, such as Microsoft Graph, Cortana, bot provisioning, and other third-party services.

          Prior to 2017, PCS ran in three datacenters in the United States, with data for one-third of the service’s 4 billion users represented in each datacenter. Each location had a large, monolithic SQL Server relational database. Having been in place for several years, those databases were beginning to show their age. Specific problems and pains included:

          • Maintainability: The databases had a huge, complex, tightly coupled code base, with long stored procedures that were difficult to modify and debug. There were many interdependencies, as the database was owned by a separate team and contained data for more than just Skype, its largest user. And with user data split across three such systems in three different locations, Skype needed to maintain its own routing logic based on which user’s data it needed to retrieve or update.
          • Excessive latency: With all PCS data being served from the United States, Skype clients in other geographies and the local infrastructure that supported them (such as call controllers) experienced unacceptable latency when querying or updating PCS data. For example, Skype has an internal service level agreement (SLA) of less than one second when setting up a call. However, the round-trip times for the permission check performed by a local call controller in Europe, which reads data from PCS to ensure that user A has permission to call user B, made it impossible to set up a call between two users in Europe within the required one-second period.
          • Reliability and data quality: Database deadlocks were a problem—and were exacerbated because data used by PCS was shared with other systems. Data quality was also an issue, with users complaining about missing contacts, incorrect data for contacts, and so on.

          All of these problems became worse as usage grew, to the point that, by 2017, the pain had become unacceptable. Deadlocks were becoming more and more common as database traffic increased, which resulted in service outages, and weekly backups were leaving some data unavailable. “We did the best with what we had, coming up with lots of workarounds to deal with all the deadlocks, such as extra code to throttle database requests,” recalls Frantisek Kaduk, Principal .NET Developer on the Skype team. “As the problems continued to get worse, we realized we had to do something different.”

          In addition, the team faced a deadline related to General Data Protection Regulation (GDPR); the system didn’t meet GDPR requirements, so there was a deadline for shutting down the servers.

          The team decided that, to deliver an uncompromised user experience, it needed its own data store. Requirements included high throughput, low latency, and high availability, all of which had to be met regardless of where in the world users were.

          An event-driven architecture was a natural fit; however, it would need to be more than a basic implementation that stored only current state. "We needed a better audit trail, which meant also storing all the events leading up to a state change," explains Kaduk. "For example, to handle misbehaving clients, we need to be able to replay that series of events. Similarly, we need event history to handle cross-service/cross-shard transactions and other post-processing tasks. The events capture the originator of a state change, the intention of that change, and the result of it."
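
          In that spirit, here is a minimal Python sketch of appending such state-change events to an Azure Cosmos DB container. The account, database, container, and field names are assumptions chosen for illustration; they are not Skype's actual schema.

          import uuid
          from datetime import datetime, timezone
          from azure.cosmos import CosmosClient

          # Placeholders: account endpoint and key, plus assumed database/container names.
          client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
          container = client.get_database_client("contacts").get_container_client("events")

          def record_event(user_id, intent, payload):
              # Append an immutable event capturing who changed what, why, and when.
              event = {
                  "id": str(uuid.uuid4()),
                  "userId": user_id,        # assumed partition key
                  "intent": intent,         # e.g. "contact_added"
                  "payload": payload,
                  "timestamp": datetime.now(timezone.utc).isoformat(),
              }
              container.create_item(event)
              return event

          record_event("user-123", "contact_added", {"contactId": "user-456"})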

          Continue on to part 2, which examines how Skype implemented Azure Cosmos DB to modernize its backend infrastructure.
