
AI, Machine Learning and Data Science Roundup: June 2019


A monthly roundup of news about Artificial Intelligence, Machine Learning and Data Science. This is an eclectic collection of interesting blog posts, software announcements and data applications from Microsoft and elsewhere that I've noted over the past month or so.

Open Source AI, ML & Data Science News

TensorFlow 2.0 beta is now available, featuring first-class Keras support and eager execution enabled by default.

MLflow 1.0, the open source platform for managing end-to-end machine learning lifecycles from Databricks, is now available.

Industry News

Facebook open sources PyRobot, a framework that enables AI researchers and students to control a physical robot with a few lines of Python code.

Databricks Connect, a new universal Spark client library with bindings in Python, Scala, Java and R to manage data and compute in hosted Databricks environments.

Google introduces Deep Learning Containers (beta), which come with a Jupyter environment pre-configured to interact with the GCP AI platform.

GCP AI Platform Notebooks, Google's managed JupyterLab service, now support the R language (beta).

Amazon releases GluonTS, an open-source Python toolkit for building deep-learning based time series models.

Microsoft News

Microsoft introduces Immersive Reader, a new Azure Cognitive Service that allows developers to provide assisted reading experiences to non-native speakers and people with dyslexia, ADHD, or visual impairment.

Microsoft open-sources TensorWatch, a Jupyter-based debugging and visualization tool to observe machine learning and deep learning training in progress.

Azure Machine Learning Service introduces new time series forecasting capabilities.

Power BI adds AI capabilities: cognitive text and image analysis, and consumption of models from Azure ML Services.

InterpretML, a new open-source Python package from Microsoft Research for training interpretable models and explaining black-box systems.

MLOps, an extension to Azure DevOps for orchestration and management of models in Azure ML Service, such as this Video Anomaly Detection example.

Learning resources

ONNX.JS: demos with code of running GPU-accelerated inference on ONNX models in the browser.

Tutorial: using the new Automated Machine Learning web user interface in the Azure portal.

Video recordings from the 2019 New York R Conference are available to view.

Feature Engineering and Selection, a new book by Max Kuhn and Kjell Johnson, with implementations in R.

An overview of datatable, Python's version of the R package for efficient, multithreaded processing of out-of-memory datasets.

Learn R and Python in Parallel, an online book by Nailong Zhang useful for learning one language based on your knowledge of the other.

Mastering Shiny, a new book in progress by Joe Cheng, developer of the interactive UI framework for R.

Exploring Data with R, an introduction to R from MSDN Magazine.

Applications

A new paper by leading researchers suggests 10 domains where AI could be applied to address the threat of climate disruption.

Uber uses causal inference in product development, operations analysis and improving user experiences.

The Future Computed: AI and Manufacturing, a 135-page Microsoft e-book featuring applications of AI in manufacturing.

Try out GauGAN, NVIDIA's style transfer algorithm that converts a crude finger-painting into a realistic landscape.

MASS, a new pre-training method that outperforms BERT and GPT in generating realistic natural language text.

FUNIT (Few-Shot Unsupervised Image-to-Image Translation), an NVIDIA research project used to convert images of one animal (or even a human face) to other breeds/species.

Neural Code Search, a Facebook model for using natural language search to find relevant computer code.

Google Research Football, an open-source reinforcement learning simulator to teach an AI agent to play a computer soccer game.

The Dalí Museum in Florida creates an interactive simulation of the iconic artist from historical footage (video).

Editor's note: The monthly roundup will return in August. Find previous editions of the monthly AI roundup here.



What’s new in Azure DevOps Sprint 153


Sprint 153 has just finished rolling out to all organisations and you can check out all the cool features in the release notes. Here are just some of the features that you can start using today.

Support for queries with tree of work items

You can now embed queries that group work items into a hierarchy in a wiki page, for example to show Epics and Features along with the child Tasks and User Stories.

Top publisher certification program

We’ve created a Top Publisher program in the Marketplace to help you evaluate or acquire Azure DevOps extensions and integrations with confidence. The Top Publisher badge indicates that the publisher has shown commitment to their customers and the Marketplace through exemplary policies, quality, reliability, and support. Marketplace assigns the badge to a publisher after carefully reviewing the publisher across a variety of parameters. Read more about the Top Publisher program here.

Quickly view linked GitHub activity from the Kanban board

When reviewing the Kanban board yourself or as a team, you often have questions like “has this item started development yet?” or “is this item in review yet?” With the new GitHub annotations on the Kanban board, now you can get a quick sense of where an item is and directly navigate to the GitHub commit, pull request, or issue for more detail. Check out the documentation here and install the Azure Boards app from the GitHub marketplace to get started.

Copy work items with attachments and links + Preview text files on work item

Sometimes you may need to create a copy of a work item and include minor changes to the new work item. Previously, you could only copy the work item’s content and links. Now, you can copy attachments as well.

These are just the tip of the iceberg, and there are plenty more features that we’ve released in Sprint 153. Check out the full list of features for this sprint in the release notes.

The post What’s new in Azure DevOps Sprint 153 appeared first on Azure DevOps Blog.

.NET Framework June 27, 2019 Cumulative Update for Windows 10 version 1903


Today, we released the June 2019 Cumulative Update for .NET Framework 3.5 and 4.8 on Windows 10 version 1903.

Quality and Reliability

This release contains the following quality and reliability improvements.

  • Improves the memory allocation and cleanup scheduling behavior of the weak-event pattern. To opt in to these improvements, set the AppContext switches Switch.MS.Internal.EnableWeakEventMemoryImprovements and Switch.MS.Internal.EnableCleanupSchedulingImprovements to "true" (see the configuration sketch after this list).
  • Addresses an InvalidOperationException that can occur during weak-event cleanup, if called re-entrantly while a weak-event delivery is in progress.
  • Addresses InvalidOperationException errors in System.Web.Hosting.RecycleLimitMonitor+RecycleLimitMonitorSingleton.AlertProxyMonitors. Worker processes for ASP.NET 4.7 and later are vulnerable to unexpected crashes from this exception if the worker process consumes close to its configured Private Bytes recycling limit while application domains are being created or recycled (perhaps because of configuration file changes, or the presence of more than one application per worker process).
  • Addresses an issue in which a Workflow Service could get into a looping situation if an unhandled exception occurs during Cancel processing. To break this cycle, add the following AppSetting to the workflow service's Web.config file; it causes the workflow service instance to terminate, instead of abort, if an unhandled exception occurs during Cancel processing:

    <appSettings>
      <add key="microsoft:WorkflowServices:TerminateOnUnhandledExceptionDuringCancel" value="true"/>
    </appSettings>
  • Addresses multiple Accessibility and High DPI awareness improvements for Windows Forms and Windows Presentation Foundation (WPF) applications.
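
As an illustration, here is a minimal sketch of how the weak-event opt-in switches above could be set in an application's configuration file (the switch names come from the release notes; the surrounding file is otherwise boilerplate):

<configuration>
  <runtime>
    <!-- Opt in to the weak-event memory and cleanup-scheduling improvements -->
    <AppContextSwitchOverrides value="Switch.MS.Internal.EnableWeakEventMemoryImprovements=true;Switch.MS.Internal.EnableCleanupSchedulingImprovements=true" />
  </runtime>
</configuration>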

Getting the Update

The Cumulative Update is available via Windows Update, Windows Server Update Services (WSUS) and Microsoft Update Catalog.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog.

Product Version: Windows 10 1903 (May 2019 Update), .NET Framework 3.5 and 4.8
Cumulative Update: 4502584 (Microsoft Update Catalog)

Previous Monthly Rollups

The last .NET Framework update for Windows 10 version 1903 is listed below for your convenience:

The post .NET Framework June 27, 2019 Cumulative Update for Windows 10 version 1903 appeared first on .NET Blog.

Introducing tracking prevention, now available in Microsoft Edge preview builds

$
0
0

Today, we’re releasing an experimental preview of tracking prevention for Microsoft Edge. We initially demoed this feature at Build 2019 as one of the concepts we’re exploring to offer greater transparency and control over your online data. Microsoft Edge Insiders can now try out tracking prevention by enabling the experimental flag on Microsoft Edge preview builds starting with version 77.0.203.0 (today’s Canary channel release). (Note: Today’s Canary release is not currently available for macOS due to a build issue. Tracking prevention will be available in the next update to the Canary channel on macOS.)

Tracking prevention is designed to protect you from being tracked by websites that you aren’t accessing directly. Whenever a website is visited, trackers from other sites may save information in the browser using cookies and other storage mechanisms. This information may include the sites you’ve visited and the content you’re interested in, building a digital profile which organizations can access to offer personalized content when you visit other sites.

The implementation in Microsoft Edge Insider preview builds is early and is likely to change as we hear from our customers and continue to test the feature. For that reason, it’s currently behind an experimental flag and disabled by default. There may be some bugs or site issues, but we want to get it into your hands to hear what you think.

Turning on tracking prevention

To try out tracking prevention, you’ll need to be on a Microsoft Edge Insider preview build (version 77.0.203.0 or higher – or at least today’s Canary channel release). Once you’re on the right build, you’ll need to manually enable the experiment.

In the address bar, enter edge://flags#edge-tracking-prevention to open the experimental settings page. Click the dropdown and choose Enabled, then click the Relaunch Now button to close all Microsoft Edge windows and relaunch Microsoft Edge.

Screenshot showing edge://flags

Tracking prevention can be enabled via edge://flags

That’s it! Once the tracking prevention experiment is enabled, you can go to the Microsoft Edge privacy settings page to control settings for tracking prevention. In the address bar, enter edge://settings/privacy and adjust the settings as desired:

Screenshot showing settings for tracking prevention (with "Basic," "Balanced," and "Strict" options).

The default tracking prevention setting is Balanced, which blocks 3rd party trackers and known malicious trackers for an experience that balances privacy and web compatibility. You can customize tracking prevention to your preferences by setting it to Strict, which blocks the majority of 3rd party trackers, or Basic, which only blocks malicious trackers.

How it works

When blocking a tracker, we aim to stop it from accessing previously stored tracking information and storing new tracking information. When tracking resources don’t add meaningful functionality to the page, we may even block them entirely. In order to do this, tracking prevention is made up of three main components.

  • Classification: How we determine what is considered a tracking URL.
  • Enforcement: The actions we take to protect our users from trackers.
  • Mitigations: The mechanisms we use to make sure your favorite sites still work, while offering strong default protection.

Classification

We’ve added a new component to Microsoft Edge, Trust Protection Lists, that contains the latest information on which organizations may be trying to track users on the web. This component allows us to be flexible with where we source details on what a tracker is and when we deliver updated lists to our users.

To check whether a URL is considered a tracker by our classification system, we test a series of hostnames, starting with an exact match and then proceeding to partial matches for up to 4 labels beyond the top-level domain.

Example:

URL: https://a.subdomain.of.a.known.tracker.test/some/path

Tested hostnames:

    • subdomain.of.a.known.tracker.test
    • a.known.tracker.test
    • known.tracker.test
    • tracker.test
    • test

If any of those hostnames represents a known tracker, we proceed with evaluating enforcement actions intended to prevent the user from being tracked.
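
To make the lookup concrete, here is a rough Python sketch of the candidate-generation step described above (illustrative only; the exact ordering and label limits in Microsoft Edge may differ):

from urllib.parse import urlsplit

def candidate_hostnames(url, max_labels_beyond_tld=4):
    """Return hostnames to test against the tracker list: the exact
    host first, then partial matches down to the top-level domain."""
    host = urlsplit(url).hostname
    labels = host.split(".")
    candidates = [host]
    # Longest partial match first, down to the bare top-level domain.
    for n in range(max_labels_beyond_tld + 1, 0, -1):
        suffix = ".".join(labels[-n:])
        if suffix not in candidates:
            candidates.append(suffix)
    return candidates

print(candidate_hostnames("https://a.subdomain.of.a.known.tracker.test/some/path"))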

Enforcement

To provide protection for our users from tracking actions on the web, we take two enforcement actions against trackers:

  • Restrict storage access: If a known tracking resource tries to access any web storage where it may try to persist data about the user, we will block that access. This includes restricting the ability for that tracker to get or set cookies as well as access storage APIs such as IndexedDB and localStorage.
  • Block resource loads: If a known tracking resource is being loaded on a website, we may block that load before the request reaches the network depending on its compatibility impact and the tracking prevention setting you have set. Blocked loads may include tracking scripts, “pixels”, iframes, and more. This prevents any data potentially being sent to the tracking domain and may even improve load times and performance of the page as a side effect.

You can view the number of trackers blocked on a page by clicking the page info button next to the URL in the address bar at the top of the browser. Here you can change the tracking prevention setting on a site-by-site basis if you trust a site, or if something doesn’t seem to be working properly.
Screenshot showing the "page info" button (lock icon) next to the URL in the address bar.

Mitigations

The web is a complex place and we realize there is no “one size fits all” solution to privacy. Depending on the mode of tracking prevention you enable, we will take different actions to balance our enforcement and put you in control of your personal experience on the web.

Tracker categorization

Every tracking resource is classified into a category that best represents the type of tracking activities it performs. Every tracking prevention mode uses a set of categories to represent what types of trackers will have storage access restricted or resource loads blocked.

Not all types of trackers are equal. Fingerprinting trackers attempt to identify you or your browser based on its unique characteristics. Cryptomining scripts attempt to abuse your processor and memory to generate cryptocurrencies, reducing your browser’s performance and battery life. Even when you’ve opted into our Basic mode, you will be protected from these egregious types of tracking.

By default, in Balanced, all users will get a robust set of tracker categories that have storage access blocked, and a slightly smaller set that have resource loads blocked. We have taken care to ensure that these sets provide protection while ensuring compatibility as you browse the web and use your favorite applications. For example, Balanced will allow third party content to enable login flows using third party identities or social network commenting on third party sites.

Our Strict mode provides the largest set of categories to block storage access and resource loads. This is for users who don’t mind a little bit of site breakage in exchange for greater protection. This is also the default level of protection when you launch an InPrivate window.

Organizations

Not all organizations do business on the internet using just one domain name. In order to help keep sites working smoothly, we group domains owned and operated by the same organization together. For instance, we might have a grouping that says “Org1” owns the domains “org1.test” and “org1-cdn.test”. If the user visits https://org1.test/, and it tries to load a resource from https://org1-cdn.test/, we won’t take any enforcement actions against that auxiliary domain even though it’s not a first party URL. However, if another organization, Org2 (https://org2.test/), tries to load that same resource, it would be subject to restrictions because it is not part of the same organization.

We are currently experimenting with ways to provide even greater privacy protection by investigating opportunities to expand the types of trackers we block for you. For the Balanced setting, we may start to consider your recent interactions with sites. For example, for sites that you interact with in a first party context on a regular basis, access to cookies, localStorage, IndexedDB and other storage may be allowed in a broader context to ensure web functionality, like login flows or social network commenting, just works. For sites you don’t visit, we may more aggressively block that content in a third-party context. This will let us improve protection while the sites you care about continue to work across the web.

For our enterprise customers, we are experimenting with exposing policies to allow the right balance of control in order to ensure all their users are protected and existing line of business apps continue to work.

Debugging tools

In order to help web developers identify trackers on their websites that may be affected by this feature, we’ve added some DevTools console messages to show when enforcement actions are taken. These can be used to see exactly what was restricted and help identify which parts of a site may need to be better tailored towards protecting a user’s privacy.
Screenshot showing the Edge DevTools with console messages for tracking prevention enforcement actions.

Send us feedback

We want to hear from you about this feature. If you think something’s not working right or it’s blocking too much or too little, please send us feedback using the “smiley face” icon in the top right corner of the browser.

Screenshot showing the "smiley face" feedback icon in Microsoft Edge

If you’re a web developer, try out the DevTools experience with tracking prevention enabled and let us know what you think. If you’re a web surfer, catch some waves and let us know how tracking prevention fits into your browsing habits.

We’ll use your feedback on this experimental feature in the Canary and Dev channels to understand potential impact to web compatibility and iterate on the experience to be helpful and easy to use.

As we gather feedback and continue to tune the feature, we will begin rolling out tracking prevention to a broader audience.

Thanks for being a part of this early preview!

– Brandon Maslen, Senior Software Engineer
– Ryan Cropp, Software Engineer

The post Introducing tracking prevention, now available in Microsoft Edge preview builds appeared first on Microsoft Edge Blog.

Top Stories from the Microsoft DevOps Community – 2019.06.28


This week was a busy week in Azure DevOps! Thanks to this vibrant community, it was difficult to choose the top stories (what a great problem to have). If I’ve missed anything important, please feel free to send it my way; I am @DivineOps on Twitter.

GDBC: Azure learnings from running at scale
Let’s start with the recap of the Global DevOps Bootcamp 2019, delivered at 100 (!) venues around the globe on June 15th. In the spirit of continuous improvement, the team gathered the event feedback from 2018 and worked hard to improve the attendee experience. The recap highlights the process of pushing the quotas of Azure Subscriptions and Azure DevOps organizations to their limits, and will be of great use to anyone delivering a large-scale training event.

Using Azure DevOps from the Command Line
Have you tried the az devops Azure CLI extension yet? This post by George Verghese walks you through the process of installing and configuring the extension. With Azure DevOps CLI you can queue your Builds, list your Build Agents, manage your Branch Policy and so much more from the command line! For more details, please also refer to az devops docs.
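
To give a flavor of the workflow (a sketch; the organization, project, and pipeline names are placeholders), getting started looks roughly like this:

# Install the Azure DevOps extension for the Azure CLI
az extension add --name azure-devops

# Set defaults so you don't repeat them on every call
az devops configure --defaults organization=https://dev.azure.com/contoso project=MyProject

# Queue a build and list the agents in a pool
az pipelines build queue --definition-name "MyProject-CI"
az pipelines agent list --pool-id 1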

pytest-azurepipelines 0.7.0
For the Python lovers out there, this PyTest plugin can significantly improve your experience by making test results available in the Azure Pipelines UI! This neat plugin automatically uploads your test results and code coverage data, and formats the test data to display passes and failures on the Azure Pipelines Test tab. Lots of kudos to Anthony Shaw for all the hard work on this!

Azure DevOps Hidden Gems #2 – Run Build or Release Tasks According to Custom Conditions
Need to create different Builds for different circumstances? You may be able to customize the existing pipeline to suit your needs! This little gem by Dr. Graham Smith highlights the capability of Azure Pipelines to execute pipeline tasks based on custom conditions. This feature comes in handy if you need to run different tasks for different branches, Build trigger types, or even custom variable values.
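
For a flavor of what this looks like in a YAML pipeline (a minimal sketch; the script and branch are made up), a step can be gated on the branch that triggered the build:

steps:
- script: ./deploy.sh
  displayName: Deploy (master only)
  # Run only when previous steps succeeded AND the build came from master
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))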

Passing variables from stage to stage in Azure DevOps Release Pipelines
What happens when you need to pass a variable from one pipeline stage to the next? Azure Pipelines offers a number of ways to define variables, but updating the values on the fly can be a bit tricky, especially since different pipeline stages may execute at different times and on different Build Agents. Following up on an earlier post by Donovan Brown, this blog post by Stefan Stranger walks through the process of updating and reading variable values using the Azure DevOps REST API.

Perfecting Continuous Delivery of NuGet packages for Azure Artifacts
Are you working on your package management strategy? This article by Utkarsh Shigihalli features a detailed walkthrough of the entire process, including the developer workflow, package versioning strategy, package CI/CD pipeline, and setting permissions for developers to access packages based on @local, @prerelease and @release tags.

Scaling from 2,000 to 25,000 engineers on GitHub at Microsoft
Last but not least, this article by Jeff Wilcox walks through the process of scaling Microsoft's open source contributions from 2,000 to 25,000 engineers. Jeff reviews the principles we relied on to enable more Microsoft teams to open source their products, contribute to other open source efforts, and leverage open source libraries, and what we've learned along the way.

If you’ve written an article about Azure DevOps or find some great content about DevOps on Azure, please share it with the #AzureDevOps hashtag on Twitter!

The post Top Stories from the Microsoft DevOps Community – 2019.06.28 appeared first on Azure DevOps Blog.

Git is case-sensitive and your filesystem may not be – Weird folder merging on Windows


I was working on DasBlog Core (a .NET Core cross-platform update of the ASP.NET WebForms-based blogging software that runs this blog) with Mark Downie, the new project manager, and Shayne Boyer. This is part of a larger cloud re-architecture of hanselman.com and the systems that run this whole site.

Shayne was working on getting a DasBlog Core CI/CD (Continuous Integration/Continuous Delivery) pipeline running in Azure DevOps' build system. We wanted individual build pipelines to confirm that DasBlog Core was, in fact, cross-platform, so we needed to build, test, and run it on Windows, Linux, and Mac.

The build was working great on Windows and Mac...but failing on Linux. Why?

Well, like all things, it's complex.

  • Windows has a case-insensitive file system.
  • By default, Mac uses a case-insensitive file system.
  • Linux uses a case-sensitive file system.

Since Git 1.5ish there's been a setting

git config --global core.ignorecase true

but you should always be aware of what a setting does before you just set it.

If you're not careful, you or someone on your team can create a case-sensitive file path in your git index while you're using a case-insensitive operating system like Windows or Mac. If you do this, you can end up with two separate entries from git's perspective. However, Windows will silently merge them and see just one.

Here's our themes folder structure as seen on GitHub.com.

Case-insensitive folder names

But when we clone it on Mac or Windows, we see just one folder.

DasBlog as a single folder in VS Code

Turns out that six months ago one of us introduced another folder named dasblog while the original was DasBlog. When we checked them out on Mac or Windows, the files ended up merged into one folder, but on Linux they are two, so the build fails.

You can fix this in a few ways. You can rename the file in a case-sensitive way and commit the change:

git mv -f name.txt NAME.TXT

Please take care and back up anything you don't understand.

If you're renaming a directory, you'll do a two-stage rename with a temp name.

git mv foo foo2

git mv foo2 FOO
git commit -m "changed case of dir"
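
To check whether a repo already contains paths that differ only by case (a one-liner, assuming GNU or BSD coreutils), you can list the tracked files and look for case-insensitive duplicates:

# Print tracked paths that collide when compared case-insensitively
git ls-files | sort -f | uniq -di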

Be safe out there!


Sponsor: Looking for a tool for performance profiling, unit test coverage, and continuous testing that works cross-platform on Windows, macOS, and Linux? Check out the latest JetBrains Rider!



© 2019 Scott Hanselman. All rights reserved.
     

Helping move healthcare organizations to Azure


Today’s healthcare organizations are expected to be agile, reduce costs, and direct capital toward revenue-generating activities that improve patient outcomes. The cloud is a key part of the answer, but implementing a new solution on the cloud also requires new skills, especially around governance, compliance with HIPAA, and security practices. Many healthcare organizations look to an experienced partner to help them migrate solutions from on-premises to the cloud, while building in the right set of structures to seamlessly handle known and future challenges.

The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how one Microsoft partner uses Azure to solve a unique problem.

Wanted: Governance and compliance expertise

For organizations that have moved to the cloud, a lack of governance and understanding of the way cloud services work can lead to wasted spending, unpredictable cloud service bills, and cloud vendor lock-in. The rapid growth of cloud infrastructures also creates a dizzying array of possibilities that can leave a team uncertain of the correct path and second-guessing their choices, which can lead to delays and add risk of failure.

Now, healthcare CIOs increasingly rely on cloud platforms, but they run into new problems. Preventing the inevitable difficulties requires a staff that is fully enabled with the right skills for compliance, privacy, and security. Health IT professionals need guidance on how to move an on-premises healthcare infrastructure to a cloud platform while ensuring HIPAA compliance, policies, safeguards, and resources are in place.

Here are the major areas that require thought and planning:

  • Privacy, compliance concerns: Protecting patient data is a persistent concern, along with implementation, uncertainty, and risk. Concerns about HIPAA compliance, cloud, and legacy system integration are among the major obstacles that have kept healthcare IT on-premises.
  • Budget constraints, cost optimization: Cloud service bills are often highly detailed and complicated, making it difficult to determine which application, department, or resource is the source of a cost overrun.
  • Technical hurdles: Healthcare IT professionals may not have the skills or resources to leverage cloud services to do things like extend an on-premises datacenter to a hybrid cloud.
  • Training: Retaining and enabling IT staff is a key challenge, and education on any new solution is critical to success. Everyone should have easy-to-understand resources regardless of role, whether IT leader, administrator, developer, or database administrator.
  • Gaps in capabilities: Even with an on-premises solution, many use special services from a vendor. Planning should include those partners as well as specialized areas that the vendors don’t currently address.

Solution

Burwood Group is a Microsoft partner that specializes in moving healthcare organizations to Azure. If a client has a secure, on-premises network, Burwood will build a secure cloud network and apply the same regulatory controls used for an on-premises installation. They will also educate technology teams on endpoint security and serverless security, with emphasis on HIPAA compliance in the cloud.

The consulting firm offers extensive training. For example, through a one-day class, they provide the basic education to have a successful implementation in Azure, with an emphasis on healthcare requirements in the cloud. This workshop includes hands-on lab exercises and is 100 percent focused on pertinent, practical, and actionable knowledge.

Benefits

  • Standardization: As a cloud team, nothing is left to guesswork. Instead, consistency is instilled across the team. Through education, Burwood introduces the healthcare datacenter in Azure.
  • Flexibility: IT teams may need to work with multiple cloud architectures for healthcare. This occurs as care is increasingly managed across settings with more interoperability across applications and business entities. Understanding best practices for the cloud allows expertise that is independent of any application or vendor.
  • Control: When it comes to cloud governance for healthcare, organizations need to control cloud sprawl. As personnel enter or leave an organization, permissions must be carefully allowed or revoked to prevent security breaches. Burwood provides education on these subjects: What is going into and out of Azure? Who has rights to resources in Azure? These types of questions are answered.
  • Service catalog: Burwood seeks to keep users informed of new services through a service catalog. Users are instructed about the following.
    1. Handling cloud service requests and change management.
    2. Expanding the current service catalog through an Azure for healthcare IT emphasis.
    3. Potential items that users can request through the service catalog in Azure.
  • Indexing: All resources in the cloud must be tagged with cost center, creation date, and more.
  • IP awareness: Users are instructed to be very careful of public IP address assignments, and the potential of creating vulnerabilities.

Services

The company has a proficiency in both healthcare and Azure technology. These are a few of the Azure services used to create custom solutions:

Next steps

To learn more about other industry solutions, go to the Azure for healthcare page. To find more details about consulting and a one day Azure University for healthcare workshop, go to the Azure Marketplace listing for the Burwood Group and select Contact me.


Azure Cost Management updates – June 2019


Whether you're a new student, thriving startup, or the largest enterprise, you have financial constraints and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Azure Cost Management comes in.

We're always looking for ways to learn more about your challenges and how Cost Management can help you better understand where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less.

Here are the improvements that we'll be looking at today, all based on your feedback:

Let's dig into the details.

 

Reservation and marketplace purchases for Enterprise Agreements and AWS

Effective cost management starts by getting all your costs into a single place with a single taxonomy. Now, with the addition of reservation and marketplace purchases, you have a more complete picture of your Enterprise Agreements (EA) for Azure and AWS costs, and can track large reservation costs back to the teams using the reservation benefit. Breaking reservation purchases down will simplify cost allocation efforts, making it easier than ever to manage internal chargeback.

Showing amortized costs of $243M for the same period above which showed just under $50K of actual costs. Virtual machines are now showing costs based on a pre-purchased reservation.

Start by opening cost analysis and changing scope to your EA billing account, AWS consolidated account, or a management group which spans both. You'll notice four new grouping and filtering options to break down and drill into costs:

  • Charge type indicates which costs are from usage, purchases, and refunds.
  • Publisher type indicates which costs are from Azure, AWS, and marketplace. Marketplace costs include all clouds. Use Provider to distinguish between the total Azure and AWS costs, and first and third-party costs.
  • Reservation specifies what the reservation costs are associated with, if applicable.
  • Frequency indicates which costs are usage-based, one-time fees, or recurring charges.

By default, cost analysis shows your actual cost as it is on your bill. This is ideal for reconciling your invoice, but results in visible spikes from large purchases. This also means usage against a reservation will show no cost, since it was prepaid, and subscription and resource group readers won't have any visibility into their effective costs. This is where amortization comes in.

Switch to the amortized cost view to break down reservation purchases into daily chunks and spread them over the duration of the reservation term. As an example, instead of seeing a $365 purchase on January 1, you will see a $1 purchase every day from January 1 to December 31. In addition to basic amortization, these costs are also reallocated and associated with the specific resources which used the reservation. For example, if that $1 daily charge is split between two virtual machines, you'll see two $0.50 charges for the day. If part of the reservation is not utilized for the day, you'll see one $0.50 charge associated with the applicable virtual machine and another $0.50 charge with a new charge type titled UnusedReservation.
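
To make the arithmetic concrete, here is a small illustrative Python sketch of the daily amortization and reallocation described above (the numbers mirror the example; this is just the idea, not how the service is implemented):

def amortize(purchase_cost, term_days, daily_usage_shares):
    """Spread a reservation purchase across its term, splitting each
    day's slice across the resources that used it; any remainder is
    reported as UnusedReservation."""
    daily = purchase_cost / term_days  # e.g. $365 / 365 days = $1 per day
    for day, shares in enumerate(daily_usage_shares, start=1):
        used = {vm: daily * share for vm, share in shares.items()}
        unused = daily - sum(used.values())
        yield day, used, round(unused, 2)

# Day 1: two VMs split the reservation; day 2: half of it sits unused.
for day, used, unused in amortize(365.0, 365, [{"vm1": 0.5, "vm2": 0.5}, {"vm1": 0.5}]):
    print(day, used, "UnusedReservation:", unused)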

As an added bonus, subscription, resource group, and AWS linked account readers can also see their effective costs by viewing amortized costs. They won't be able to see the purchases, which are only visible on the billing account, but they can see their discounted cost based on the reservation.

To build a simple chargeback report, switch to amortized cost, select no granularity to view the total costs for the period, group by resource group, and change to table view. Then, download the data to Excel or CSV for offline analysis or to merge with your own data.

An image of the amortized cost page, table view.

If you need to automate getting cost data, you have two options. Use the Query API for rich analysis with dynamic filtering, grouping, and aggregation or use the UsageDetails API for the full, unaggregated cost and usage data. Note UsageDetails is only available for Azure scopes. The general availability (GA) version of these APIs is 2019-01-01, but you'll want to use 2019-04-01-preview to include reservation and Marketplace purchases.

As an example, let's get an aggregated view of amortized costs broken down by charge type, publisher type, resource group (left empty for purchases), and reservation (left empty if not applicable).

POST https://management.azure.com/{scope}/providers/Microsoft.CostManagement/query?api-version=2019-04-01-preview
Content-Type: application/json

{
  "type": "AmortizedCost",
  "timeframe": "Custom",
  "timePeriod": { "from": "2019-06-01", "to": "2019-06-30" },
  "dataset": {
    "granularity": "None",
    "aggregation": {
      "totalCost": { "name": "PreTaxCost", "function": "Sum" }
    },
    "grouping": [
      { "type": "dimension", "name": "ChargeType" },
      { "type": "dimension", "name": "PublisherType" },
      { "type": "dimension", "name": "Frequency" },
      { "type": "dimension", "name": "ResourceGroup" },
      { "type": "dimension", "name": "SubscriptionName" },
      { "type": "dimension", "name": "SubscriptionId" },
      { "type": "dimension", "name": "ReservationName" },
      { "type": "dimension", "name": "ReservationId" }
    ]
  }
}

And if you don't need the aggregation and prefer the full, raw dataset for Azure scopes:

GET https://management.azure.com/{scope}/providers/Microsoft.Consumption/usageDetails?metric=AmortizedCost&$filter=properties/usageStart+ge+'2019-06-01'+AND+properties/usageEnd+le+'2019-06-30'&api-version=2019-04-01-preview

If you need actual costs to show purchases as they are shown on your bill, simply change the type or metric to ActualCost. For more information about these APIs, refer to the Query and UsageDetails API documentation. The published docs show the GA version, but they both work the same for the 2019-04-01-preview API version outside of the new type/metric attribute.

Note that Cost Management APIs work across all scopes above resources: resource group, subscription, and management group via Azure role-based access control (RBAC); EA billing accounts (enrollments), departments, and enrollment accounts via EA portal access; and AWS consolidated and linked accounts via Azure RBAC. To learn more about scopes, including how to determine your scope ID or manage access, see our documentation "Understand and work with scopes."

Support for reservation and marketplace purchases is currently available in preview in the Azure portal, but will roll out globally in the coming weeks. In the meantime, please check it out and let us know if you have any feedback.

 

Forecasting your Azure and AWS costs

History teaches us a lot, and knowing where you've been is critical to understanding where you're going. This is no less true when it comes to managing costs. You may start with historical costs to understand application and organization trends, but to really get into a healthy, optimized state, you need to plan for the future. Now you can with Cost Management forecasts.

Check your forecasted costs in cost analysis to anticipate and visualize cost trends, and proactively take action to avoid budget or credit overages on any scope: from a single application in a resource group, to an entire subscription or billing account, to higher-level management groups spanning both Azure and AWS resources. Learn about connecting your AWS account in last month's wrap-up here.

Cost analysis showing accumulated costs of $14.7M with a forecast of $17.9M and a warning note on the budget, which is set at $17.5M.

Cost Management forecasts are in preview in the Azure portal, and will roll out globally in the coming weeks. Check it out and let us know what you'd like to see next.

 

Standardizing cost and usage terminology for Enterprise Agreement and Microsoft Customer Agreement

Depending on whether you use a pay-as-you-go (PAYG), Enterprise Agreement (EA), Cloud Solution Provider (CSP), or Microsoft Customer Agreement (MCA) account, you may be used to different terminology. These differences are minor and won't impact your ability to understand and break down your bills, but they do introduce a challenge as your organization grows and needs a more holistic cost management solution spanning multiple account types. With the addition of AWS and the eventual migration of PAYG, EA, and CSP accounts into MCA, this becomes even more important. In an effort to streamline the transition to MCA at your next EA renewal, Cost Management now uses new column and property names to align with MCA terminology. Here are the primary differences you can expect to see for EA accounts:

  • EnrollmentNumber → BillingAccountId/BillingProfileId
    • EA enrollments are represented as "billing accounts" within the Azure portal today, and they will continue to be mapped to a BillingAccountId within the cost and usage data. No change there. MCA also introduces the ability to create multiple invoices within a billing account. The configuration of these invoices is called a "billing profile". Since EA can only have a single invoice, the enrollment effectively maps to a billing profile. In line with that conceptual model, the enrollment number will be available as both a BillingAccountId and BillingProfileId.
  • DepartmentName → InvoiceSectionName
    • MCA has a concept similar to EA departments, which allows you to group subscriptions within the invoice. These are called "invoice sections" and are nested under a billing profile. While the EA invoice isn't changing as part of this effort, EA departments will be shown as InvoiceSectionName within the cost data for consistency.
  • ProductOrderName (new)
    • New property to identify the larger product the charge applies to, like the Azure subscription offer.
  • PublisherName (new)
    • New property to indicate the publisher of the offering.
  • ServiceFamily (new)
    • New property to group related meter categories.

Organizations looking to renew their EA enrollment into a new MCA should strongly consider moving from the key-based EA APIs (such as consumption.azure.com) to the latest UsageDetails API (version 2019-04-01-preview) based on these new properties to minimize future migration work. The key-based APIs are not supported for MCA billing accounts.
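
If you have scripts that post-process usage exports, a simple rename map based on the property changes above can smooth the transition (a sketch; extend the map with any other columns your scripts rely on):

# Hypothetical rename map for scripts that post-process EA usage exports
EA_TO_MCA = {
    "EnrollmentNumber": "BillingProfileId",  # also surfaced as BillingAccountId
    "DepartmentName": "InvoiceSectionName",
}

def to_mca(row):
    """Rename EA-era columns in a usage record to MCA terminology."""
    return {EA_TO_MCA.get(key, key): value for key, value in row.items()}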

To learn more about the new terminology, see our documentation "Understand the terms in your Azure usage and charges file."

 

Keeping an eye on costs across subscriptions with management group budgets

Every organization has a bottom line. Cost Management budgets help you make sure you don't hit yours. And now, you can create budgets that span both Azure and AWS resources using management groups.

Management group budgets

Organize subscriptions into management groups, and use filters to perfectly tune the budget that's right for your teams.

To learn more, see our tutorial "Create and manage budgets."

 

Updating your dashboard tiles

You already know you can pin customized views of cost analysis to the dashboard.

Pin cost analysis to the dashboard using the pin icon at the top-right of the blade

You may have noticed these tiles were locked to the specific date range you selected when pinning it. For instance, if you chose to view this month's costs in January, the tile would always show January, even in February, March, and so on. This is no longer the case.

Cost analysis tiles now maintain the built-in range you selected in the date picker. If you pin "this month," you'll always get the current calendar month. If you pin "last 7 days," you'll get a rolling view of the last 7 days. If you select a custom date range, however, the tile will always show that specific date range.

To get the updated behavior, please update your pinned tiles. Simply click the chart on the tile to open cost analysis, select the desired date range, and pin it back to the dashboard. Your new tile will always keep the exact view you selected.

What else would help you build out your cost dashboard? Do you need other date ranges? Let us know.

 

Expanded availability of resource tags in cost reporting

Tagging is the best way to organize and categorize your resources outside of the built-in management group, subscription, and resource group hierarchy, allowing you to add your own metadata and build custom reports using cost analysis. While most Azure resources support tags, some resource types do not. Here are the latest resource types which now support tags:

  • App Service environments
  • Data Factory services
  • Event Hub namespaces
  • Load balancers
  • Service Bus namespaces

Remember, tags are part of every usage record and are only available in Cost Management reporting after the tag is applied. Historical costs are not tagged, so update your resources today for the best cost reporting; one way to do that from the command line is sketched below.
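
As a sketch (the resource group and tag values are placeholders; note that az resource tag replaces any existing tags on the resource), the Azure CLI can apply tags in bulk:

# Tag every resource in a resource group with a cost center
for id in $(az resource list --resource-group my-rg --query "[].id" -o tsv); do
  az resource tag --ids "$id" --tags CostCenter=1234 Environment=prod
done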

 

The new Cost Management YouTube channel

Last month, we talked about eight new quickstart videos to get you up and running with Cost Management quickly. Subscribe to the new Azure Cost Management YouTube channel to stay in the loop with new videos as they're released. Here's the newest video in our cost optimization collection:

Let us know what other topics you'd like to see covered.

 

What's next?

These are just a few of the big updates from the last month. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming! 

Follow @AzureCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks. And, as always, share your ideas and vote up others in the Cost Management feedback forum.

Azure. Source–Volume 89


Dear Azure fans, Azure.Source is going on hiatus. Thank you for reading each week and be sure to follow @Azure for updates and new ways to learn more.

Now available

Announcing the general availability of Azure premium files

We are excited to announce the general availability of Azure premium files for customers optimizing their cloud-based file shares on Azure. Premium files offers a higher level of performance built on solid-state drives (SSD) for fully managed file services in Azure.

Premium tier is optimized to deliver consistent performance for IO-intensive workloads that require high throughput and low latency. Premium file shares store data on the latest SSDs, making them suitable for a wide variety of workloads like databases, persistent volumes for containers, home directories, content and collaboration repositories, media and analytics, highly variable and batch workloads, and enterprise applications that are performance sensitive. Our existing standard tier continues to provide reliable performance at a low cost for workloads less sensitive to performance variability, and is well suited for general-purpose file storage, development/test, backups, and applications that do not require low latency.

Leveraging complex data to build advanced search applications with Azure Search

Data is rarely simple. Not every piece of data we have can fit nicely into a single Excel worksheet of rows and columns. Data has many diverse relationships, such as the multiple locations and phone numbers for a single customer, or the multiple authors and genres of a single book. Of course, relationships are typically even more complex than this, and as we start to leverage AI to understand our data, the additional learnings we get only add to the complexity of those relationships. For that reason, expecting customers to flatten their data so it can be searched and explored is often unrealistic. We heard this often, and it quickly became our number one most requested Azure Search feature. Because of this, we were excited to announce the general availability of complex types support in Azure Search. In this post, we explain what complex types add to Azure Search and the kinds of things you can build using this capability.
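
For example, complex types let you index a document shaped like this (a hypothetical record) without flattening it first:

{
  "id": "42",
  "name": "Contoso Coffee",
  "locations": [
    { "city": "Seattle", "phone": "555-0100" },
    { "city": "Portland", "phone": "555-0199" }
  ]
}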

Azure Blockchain Workbench 1.7.0 integration with Azure Blockchain Service

We've released Microsoft Azure Blockchain Workbench 1.7.0, which, along with our new Azure Blockchain Service, can further enhance your blockchain development and projects. You can deploy a new instance of Blockchain Workbench through the Azure portal or upgrade your existing deployments to 1.7.0 using the upgrade script. This update includes improvements such as integration with Azure Blockchain Service and enhanced compatibility with Quorum.

New PCI DSS Azure Blueprint makes compliance simpler

Announcing our second Azure Blueprint for an important compliance standard with the release of the PCI-DSS v3.2.1 blueprint. The new blueprint maps a core set of policies for Payment Card Industry (PCI) Data Security Standards (DSS) compliance to any Azure deployed architecture, allowing businesses such as retailers to quickly create new environments with compliance built in to the Azure infrastructure. Azure Blueprints is a free service that enables customers to define a repeatable set of Azure resources that implement and adhere to standards, patterns, and requirements. Azure Blueprints allow customers to set up governed Azure environments that can scale to support production implementations for large-scale migrations.

Now in preview

Event-driven analytics with Azure Data Lake Storage Gen2

Announcing that Azure Data Lake Storage Gen2 integration with Azure Event Grid is in preview. This means that Azure Data Lake Storage Gen2 can now generate events that can be consumed by Event Grid and routed to subscribers with webhooks, Azure Event Hubs, Azure Functions, and Logic Apps as endpoints. With this capability, individual changes to files and directories in Azure Data Lake Storage Gen2 can automatically be captured and made available to data engineers for creating rich big data analytics platforms that use event-driven architectures.

Technical content

How to deploy your machine learning models with Azure Machine Learning

Azure Machine Learning service is a cloud service that you use to train, deploy, automate, and manage machine learning models, all at the broad scale that the cloud provides. The service fully supports open-source technologies such as PyTorch, TensorFlow, and scikit-learn and can be used for any kind of machine learning, from classical ML to deep learning, supervised and unsupervised. In this article, you will learn how to deploy your machine learning models with Azure Machine Learning.


Azure Cloud Shell Tips for SysAdmins Part II - Using the Cloud Shell tools to Migrate

In the last blog post, Azure Cloud Shell Tips for SysAdmins (bash), the author discussed some of the tools that the Azure Cloud Shell for bash already has built in. This time he goes deeper and shows you how to utilize a combination of the tools to create an UbuntuLTS Linux server. Once the server is provisioned, he demonstrates how to use Ansible to deploy Node.js from the nodesource binary repository.

Step-By-Step: Migrating The Active Directory Certificate Service From Windows Server 2008 R2 to 2019

End of support for Windows Server 2008 R2 has been slated by Microsoft for January 14th, 2020. That announcement increased interest in a previous post detailing the steps for Active Directory Certificate Service migration from server versions older than 2008 R2. Many subscribers of ITOpsTalk.com have reached out asking for an update of the steps to reflect Active Directory Certificate Service migration from 2008 R2 to 2016/2019, and of course our team is happy to oblige.

Home Grown IoT - Local Dev

Now that we’re starting to build our IoT application, it’s time to talk about the local development experience. At the end of the day, I use IoT Edge to do the deployment onto the device and manage the communication with IoT Hub, and there is a very comprehensive development guide for Visual Studio Code and Visual Studio 2019. The workflow is to create a new IoT Edge project, set up IoT Edge on your machine, and do deployments to it that way. This is the approach I’d recommend, as it gives you the best replication of production in local development.

Delivering static content via Azure CDN | Azure Friday

In one of the prior episodes we learned how to serve a static website from Azure's blob storage. This is great for a low-volume web site, but as your site starts getting more hits, you want to deliver the content closer to the end user. In this episode, we learn how to deliver static content via the Azure Content Delivery Network (CDN). Azure CDN offers developers a global solution for rapidly delivering high-bandwidth content to users by caching their content at strategically placed physical nodes across the world.

Azure shows

Deploy your web app in Windows containers on Azure App Service | Azure Friday

Windows Container support is available in preview in Azure App Service. By deploying applications via Windows Containers in Azure App Service you can install your dependencies inside the container, call APIs currently blocked by the Azure App Service sandbox and use the power of containers to migrate applications for which you no longer have the source code. All of this and you still get to use the awesome feature set enabled by Azure App Service such as auto-scale, deployment slots and increased developer productivity.

Using open data to build family trees | The Open Source Show

Erica Joy joins Ashley McNamara to share her not-so-secret personal mission: making genealogy information open, queryable, and easily parsable. She shares a bit about why this is so critical, common challenges, and tips for rebuilding your own family tree, or using open data to uncover whatever information you need for your personal mission.

Supporting Windows forms and WPF in .NET Core 3 | On .NET

There is significant effort happening to add support for running desktop applications on .NET Core 3.0. In this episode, Jeremy interviews Mike Harsh about some of the work being done and decisions being made to enable Windows Forms and WPF applications to run well on .NET Core 3.0 and beyond.

Five things about RxJS and reactive programming | Five Things

Where do RxJS, reactive programming, and the Redux pattern fit into your developer workflow? Where can you learn from the community leaders? Does wearing a hoodie make you a better developer? Oh, and remember, go to RxJS Live and drinks are on Aaron!

How to use the Global Search in the Azure portal | Azure Portal Series

In this video of the Azure Portal “How To” Series, you will learn how to find Azure services, resources, documentation, and more using the Global Search in the Azure portal.

Episode 285 – The Azure Journey | The Azure Podcast

Sujit, Kendall, and Cynthia talk with the one and only Richard Campbell about how to tell the cloud story, the conversations to have with customers as they enter the cloud, and the implications of a globally distributed cloud that need to be considered. Probably one of our favorite shows.

Industries and partners

Solving the problem of duplicate records in healthcare

As the U.S. healthcare system continues to transition away from paper to a more digitized ecosystem, the ability to link an individual’s medical data together correctly becomes increasingly challenging. Patients move, marry, divorce, change names, and visit multiple providers throughout their lifetime; each visit creates new records, and the potential for inconsistent or duplicate information grows. Duplicate medical records often occur as a result of multiple name variations, data entry errors, and lack of interoperability (or communication) between systems. Poor patient identification and duplicate records in turn lead to diagnosis errors, redundant medical tests, skewed reporting and analytics, and billing inaccuracies. The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how one Microsoft partner, Nextgate, uses Azure to solve a unique problem.

Diagram image of Sunlight's solution using static elements and letting user configure with dynamic parts

A solution to manage policy administration from end to end

Legacy systems can be a nightmare for any business to maintain. In the insurance industry, carriers struggle not only to maintain these systems but to modify and extend them to support new business initiatives. The insurance business is complex: every state and nation has its own unique set of rules, regulations, and demographics. Creating a new product such as an automobile policy has traditionally required the coordination of many different processes, systems, and people. The monolithic systems traditionally used to create new products are inflexible, and creating a new product can be an expensive proposition. The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how one Microsoft partner, Sunlight Solutions, uses Azure to solve a unique problem.

Using natural language processing to manage healthcare records

The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how SyTrue, a Microsoft partner focusing on healthcare, uses Azure to empower healthcare organizations to improve efficiency, reduce costs, and improve patient outcomes.

Azure Cosmos DB: A competitive advantage for healthcare ISVs

CitiusTech is a specialist provider of healthcare technology services that helps its customers accelerate innovation in healthcare. CitiusTech used Azure Cosmos DB to simplify the real-time collection and movement of healthcare data from a variety of sources in a secure manner. With the proliferation of patient information from established and current sources, accompanied by stringent regulations, healthcare systems today are gradually shifting toward near real-time data integration.

Improving the Office app experience in virtual environments

Configuring a Server-side Blazor app with Azure App Configuration


With .NET Core 3.0 Preview 6, we added authentication & authorization support to server-side Blazor apps. It only takes a matter of seconds to wire up an app to Azure Active Directory with support for single or multiple organizations. Once the project is created, it contains all the configuration elements in its appsettings.json to function. This is great, but in a team environment – or in a distributed topology – configuration files lead to all sorts of problems. In this post, we’ll take a look at how we can extract those configuration values out of JSON files and into an Azure App Configuration instance, where they can be used by other teammates or apps.

Setting up Multi-org Authentication

In the .NET Core 3.0 Preview 6 blog post we explored how to use the Individual User Accounts option in the authentication dialog to set up a Blazor app with ASP.NET Identity, so we won’t go into too much detail. Essentially, you click the Change link during project creation.

Click Change Auth during project creation

In this example I’ll be using an Azure Active Directory application to allow anyone with a Microsoft account to log into the app, so I’ll select Work or School Accounts and then select Cloud – Multiple Organizations in the Change Authentication dialog.

The Visual Studio add authentication dialog.

Once the project is created, my AzureAD configuration node contains the 3 key pieces of information my app’s code will need to authenticate against Azure Active Directory: my tenant URL, the client ID for the AAD app Visual Studio created for me during the project’s creation, and the callback URI so users can get back to my app once they’ve authenticated.

The appsettings.json inclusive of the settings.
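For reference, here’s a minimal sketch of what that generated node typically looks like. The key names and values below are illustrative placeholders (in particular, "organizations" as the tenant value is an assumption for the multi-org case); your generated file may differ by template version.

{
  "AzureAd": {
    "Instance": "https://login.microsoftonline.com/",
    "ClientId": "<client-id-created-by-visual-studio>",
    "TenantId": "organizations",
    "CallbackPath": "/signin-oidc"
  }
}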

While this is conveniently placed in my appsettings.json file, it’d be better if I didn’t need any local configuration files at all. A centralized configuration-management solution would be easier to manage, and it would keep my config out of source control, should there come a point when things like connection strings need to be shared amongst developers.

Azure App Configuration

Azure App Configuration is a cloud-based solution for managing all of your configuration values. Once I have an Azure App Configuration instance set up in my subscription, adding the configuration settings is simple. By default, they’re hidden from view, but I can click Show Values or select an individual setting for editing or viewing.

The config values in Azure App Configuration
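Settings can also be created without the portal. As a sketch (assuming the Azure CLI’s appconfig commands, which were in preview at the time; check az appconfig kv -h for the current syntax), a key could be set like this:

az appconfig kv set --name MyAppConfigStore --key "AzureAd:ClientId" --value "<client-id>"

Note the ":" separator in the key, which maps to the nested structure .NET Core configuration expects.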

Convenient .NET Core IConfiguration Integration

The Azure App Configuration team has shipped a NuGet package containing extensions to ASP.NET and .NET Core that let developers use the service without changing code that already makes use of IConfiguration. To start, install the Microsoft.Extensions.Configuration.AzureAppConfiguration NuGet package.

Adding the NuGet Package for Azure App Configuration
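If you prefer the command line to the NuGet UI, the same package can be added with the dotnet CLI:

dotnet add package Microsoft.Extensions.Configuration.AzureAppConfiguration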

You’ll need to copy the connection string from the Azure Portal to enable connectivity between your app and Azure App Configuration.

Copying the Azure App Configuration connection string

Once that value has been copied, you can supply it to your app either with dotnet user-secrets or with a debug-time environment variable. Though it seems like we’ve created yet one more configuration value to track, think about it this way: this is the only value you’ll have to set using an environment variable; all your other configuration can be set via Azure App Configuration in the portal.

Setting up the Azure App Configuration connection string in an environment variable
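As a quick sketch of the user-secrets route (which keeps the value out of source control), note that the key name matches what the code below reads from configuration:

dotnet user-secrets init
dotnet user-secrets set "AzureAppConfigConnectionString" "<your-connection-string>"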

Using the Azure App Configuration Provider for .NET Core

Once the NuGet package is installed, instructing my .NET Core app to use Azure App Configuration whenever it reads configuration values from IConfiguration is simple. In Program.cs I’ll call the ConfigureAppConfiguration middleware method, then use the AddAzureAppConfiguration extension method to get the connection string from my ASPNETCORE_AzureAppConfigConnectionString environment variable. If the environment variable isn’t set, the call will no-op and the other configuration providers will do the work.

This is great, because I don’t even need to change existing – or in this case, template-generated – code; I just tell my app to use Azure App Configuration and I’m off to the races. The full update to Program.cs is shown below.

// using Microsoft.Extensions.Configuration.AzureAppConfiguration;

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            config.AddAzureAppConfiguration(options =>
            {
                // Read the connection string supplied earlier via user secrets
                // or the ASPNETCORE_-prefixed environment variable; if it's
                // missing, the call no-ops and the other providers take over.
                var azureAppConfigConnectionString =
                    hostingContext.Configuration["AzureAppConfigConnectionString"];
                options.Connect(azureAppConfigConnectionString);
            });
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });

When I run the app, it first reaches out to Azure App Configuration to get all the settings it needs to run and then works as if it were configured locally using appsettings.json. As long as my teammates or other services needing these values have the connection string to the Azure App Configuration instance holding the settings for the app, they’re good.

Running the authenticated app

Now I can remove the configuration values entirely from the appsettings.json file. If I wanted to control the logging behavior using Azure App Configuration too, I could move these left-over settings out as well. Even though I’ll be using Azure App Configuration as my primary configuration source, the other providers are still there.

The appsettings.json with the settings removed.

Dynamic Re-loading

Log levels are a good example of how the Azure App Configuration service can enable dynamic reloading of configuration settings you might need to tweak frequently. By moving my logging configuration into Azure App Configuration, I can change the log level right in the portal. In Program.cs, I can use the Watch method to specify which configuration settings I’ll want to reload when they change.

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            config.AddAzureAppConfiguration(options =>
            {
                var azureAppConfigConnectionString = 
                    hostingContext.Configuration["AzureAppConfigConnectionString"];
                options.Connect(azureAppConfigConnectionString)
                    // Re-load these log-level settings when they change in the
                    // portal (polled on the default 30-second interval).
                    .Watch("Logging:LogLevel:Default")
                    .Watch("Logging:LogLevel:Microsoft")
                    .Watch("Logging:LogLevel:Microsoft.Hosting.Lifetime");
            });
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });

The default refresh interval is 30 seconds, but now, should I need to turn up the volume on my logs to get a better view of what’s happening in my site, I don’t need to re-deploy or even stop the site. Simply changing the values in the portal is enough – 30 seconds later the values will be re-loaded from Azure App Configuration and my logging will be more verbose.

Changing configuration values in the portal

Configuration Source Ordering

The JsonConfigurationSource configuration sources – those which load settings from appsettings.json and appsettings.{Environment}.json – are loaded during the call to CreateDefaultBuilder. So, by the time I call AddAzureAppConfiguration to load in the AzureAppConfigurationSource, the JSON file providers are already in the configuration sources list.

The importance of ordering is evident here; should I want to override the configuration values coming from Azure App Configuration with my local appsettings.json or appsettings.Development.json files, I’d need to re-order the providers in the call to ConfigureAppConfiguration. Otherwise, the JSON file values will be loaded first, then the last source (the one that will “win”) will be the Azure App Configuration source.
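As a minimal sketch of one way to do that re-ordering (using the same names as the earlier snippets), re-adding the JSON providers after the Azure App Configuration source makes them load last, so local file values win:

.ConfigureAppConfiguration((hostingContext, config) =>
{
    config.AddAzureAppConfiguration(options =>
        options.Connect(
            hostingContext.Configuration["AzureAppConfigConnectionString"]));

    // Later sources override earlier ones, so re-adding the JSON files here
    // lets appsettings.json and appsettings.{Environment}.json win.
    var env = hostingContext.HostingEnvironment;
    config.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
          .AddJsonFile($"appsettings.{env.EnvironmentName}.json",
                       optional: true, reloadOnChange: true);
})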

Try it Out

Any multi-node or microservice-based application topology benefits from centralized configuration, and teams benefit by not having to keep track of so many configuration settings, environment variables, and so on. Take a look over the Azure App Configuration documentation. You’ll see that there are a multitude of other features, like Feature Flags and dark deployment support. Then, create an instance and try wiring your existing ASP.NET Core code up to read configuration values from the cloud.

The post Configuring a Server-side Blazor app with Azure App Configuration appeared first on ASP.NET Blog.

Azure FXT Edge Filer now generally available


Scaling and optimizing hybrid network-attached storage (NAS) performance gets a boost today with the general availability of the Microsoft Azure FXT Edge Filer, a caching appliance that integrates on-premises network-attached storage and Azure Blob Storage. The Azure FXT Edge Filer creates a performance tier between compute and file storage, providing high-throughput, low-latency network file system (NFS) access to high-performance computing (HPC) applications running on Linux compute farms, as well as the ability to tier storage data to Azure Blob Storage.

Fast performance tier for hybrid storage architectures

The availability of the Azure FXT Edge Filer today further integrates the highly performant and efficient technology that Avere Systems pioneered into the Azure ecosystem. The Azure FXT Edge Filer is a purpose-built evolution of the popular Avere FXT Edge Filer, in use globally to optimize storage performance in read-heavy workloads.

The new hardware model goes beyond top-line integration with substantial updates. It is now being manufactured by Dell and has been upgraded with twice as much memory and 33 percent more SSD. Two models with varying specifications are available today. With the new 6600 model, customers will see about a 40 percent improvement in read performance over the Avere FXT 5850. The appliance now supports hybrid storage architectures that include Azure Blob storage.

Image of a one-node Azure FXT Edge Filer hardware unit.

Edge filer hardware is recognized as a proven solution for storage performance improvements. With many clusters deployed around the globe, the Azure FXT Edge Filer can scale performance separately from capacity to optimize storage efficiency. Companies large and small use the appliance to accelerate challenging workloads for processes like media rendering, financial simulations, genomic analysis, seismic processing, and wide area network (WAN) optimization. Now, with the new Microsoft Azure-supported appliances, these workloads can run with even better performance and easily leverage Azure Blob storage for active archive storage capacity.

Rendering more, faster

Visual effects studios have been long-time users of this type of edge appliance, as their rendering workloads frequently push storage infrastructures to their limits. When one of these companies, Digital Domain, heard about the new Azure FXT Edge Filer hardware, they quickly agreed to preview a 3-node cluster.

“I’ve been running my production renders on Avere FXT clusters for years and wanted to see how the new Azure FXT 6600 stacks up. Setup was easy as usual, and I was impressed with the new Dell hardware. After a week of lightweight testing, I decided to aim the entire render farm at the FXT 6600 cluster and it delivered the performance required without a hiccup and room to spare.”

Mike Thompson, Principal Engineer, Digital Domain

Digital Domain has nine locations in the United States, China, and India.

Manage heterogeneous storage resources easily

Azure FXT Edge Filers help keep analysts, artists, and engineers productive, ensuring that applications aren’t affected by storage latency. Storage administrators can manage these heterogeneous pools of storage in a single file system namespace, and users access their files from a single mount point, whether they are stored in on-premises NAS or in Azure Blob storage.

Expanding a cluster to meet growing demands is as easy as adding additional nodes. The Azure FXT Edge Filer scales from three to 24 nodes, allowing even more productivity in peak periods. This scale helps companies avoid overprovisioning expensive storage arrays and enables moving to the cloud at the user’s own pace.

Gain low latency hybrid storage access

Azure FXT Edge Filers deliver high throughput and low latency for hybrid storage infrastructure supporting read-heavy HPC workloads. Azure FXT Edge Filers support storage architectures with NFS and server message block (SMB) protocol support for NetApp and Dell EMC Isilon NAS systems, as well as cloud APIs for Azure Blob storage and Amazon S3.

Customers are using the flexibility of the Azure FXT Edge Filer to move less frequently used data to cloud storage resources, while keeping files accessible with minimal latency. These active archives enable organizations to quickly leverage media assets, research, and other digital information as needed.

Enable powerful caching of data

Software on the Azure FXT Edge Filers identifies the most in-demand or hottest data and caches it closest to compute resources, whether that data is stored down the hall, across town, or across the world. With a cluster connected, the appliances take over, moving data as it warms and cools to optimize access and use of the storage.

Get started with Azure FXT Edge Filers

Whether you are currently running Avere FXT Edge Filers and looking to upgrade to the latest hardware or expand your clusters, or you are new to the technology, the process to get started is the same. You can request information by completing this online form or by reaching out to your Microsoft representative.

Microsoft will work with you to configure the optimal combination of software and hardware for your workload and facilitate its purchase and installation.

Resources

Azure FXT Edge Filer preview blog

Azure FXT Edge Filer product information

Azure FXT Edge Filer documentation

Azure FXT Edge Filer data sheet

Announcing Azure DevOps Server 2019 Update 1 RC1


Today, we are announcing the release of Azure DevOps Server 2019 Update 1 RC1. Azure DevOps Server, formerly known as Team Foundation Server or TFS, is a self-hosted package that customers can run in their own environment, on-premises, or inside VMs in the cloud, and it includes all of the Azure DevOps services: Pipelines, Boards, Repos, Artifacts, and Test Plans. It is designed for customers who aren’t ready to move to our cloud-based Azure DevOps Services yet and need the additional control of a self-managed solution.

Azure DevOps Server 2019 Update 1 RC1 is a go-live release, meaning you can install it on production servers. We expect to have another Release Candidate release before our final release.

We’ve added a ton of new features which you can read about in our release notes. We’d like to highlight some of these features:

Analytics extension no longer needed to use Analytics

Analytics is increasingly becoming an integral part of the Azure DevOps experience. It is an important capability that helps customers make data-driven decisions. For Update 1, we’re excited to announce that customers no longer need an extension to use Analytics. Customers can now enable Analytics inside the Project Collection Settings. New collections created in Update 1, as well as upgraded Azure DevOps Server 2019 collections that had the Analytics extension installed, will have Analytics enabled by default. You can find more about enabling Analytics in the documentation.

New Basic process

Some teams would like to get started quickly with a simple process template. The new Basic process provides three work item types (Epics, Issues, and Tasks) to plan and track your work.

Accept and execute on issues in GitHub while planning in Azure Boards

You can now link work items in Azure Boards with related issues in GitHub. Your team can continue accepting bug reports from users as issues within GitHub but relate and organize the team’s work overall in Azure Boards.

Pull Request improvements

We’ve added a bunch of new pull request features in Azure Repos. You can now automatically queue expired builds so PRs can autocomplete. We have added support for Fast-Forward and Semi-Linear merging when completing PRs. You can also filter by the target branch when searching for pull requests to make them easier to find.

Simplified YAML editing in Pipelines

We continue to receive feedback asking to make it easier to edit YAML files for Pipelines. In this release, we have added a web editor with IntelliSense to help you edit YAML files in the browser. We have also added a task assistant that supports most of the common task input types, such as pick lists and service connections.
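For context, the kind of file the new editor and task assistant help with is a plain azure-pipelines.yml at the root of the repository. A minimal, illustrative sketch:

trigger:
- master

pool:
  vmImage: 'ubuntu-16.04'

steps:
- script: dotnet build --configuration Release
  displayName: 'Build the solution'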

Test result trend (Advanced) widget

The Test result trend (Advanced) widget displays a trend of your test results for your pipelines or across pipelines. You can use it to track the daily count of tests, pass rate, and test duration.

Azure Artifacts improvements

This release has several improvements in Artifacts, including support for Python Packages and upstream sources for Maven. Also, Maven, npm, and Python package types are now supported in Pipeline Releases.

Wiki features

There are several new features for the wiki, including permalinks for the wiki pages, @mention for users and groups, support for HTML tags, and markdown templates for formulas and videos. You can also include work item status in a wiki page and can follow pages to get notified when the page is edited, deleted or renamed.

We’d love for you to install this release candidate and provide any feedback via Twitter to @AzureDevOps or in our Developer Community.

The post Announcing Azure DevOps Server 2019 Update 1 RC1 appeared first on Azure DevOps Blog.

Updates on Microsoft’s R Roadmap in Azure


Yesterday, Microsoft's AI Customer Engineering Team posted the first in a series of blog posts on the state and future of support for R in Azure. Check out that post for some details on the forthcoming capabilities to support R and Python-based deployments in the Azure cloud service. 

The post references this guide to the machine learning services in Azure, along with their supported languages. Services that currently support R include Azure Machine Learning Studio, SQL Server Microsoft Machine Learning Service, Microsoft Machine Learning Server, Azure Data Science Virtual Machine, Azure Databricks, and more.

The post also notes that Microsoft is committed to bringing R support to Azure Machine Learning Services. (I'll have further news about that in my talk at the useR conference in Toulouse.)

Future posts in the series will cover:

  • A deeper dive on R operationalization options
  • Big Data architectural topologies with R
  • Orchestrating mixed R and Python Machine Learning pipelines with Azure Machine Learning Services

I'll provide links to those posts on this blog when they're published as well.

Microsoft Tech Communities: Understanding your R strategy options on the Azure AI Platform


Highlights from SIGMOD 2019: New advances in database innovation


The emergence of the cloud and the edge as the new frontiers for computing is an exciting direction—data is now dispersed within and beyond the enterprise, on-premises, in the cloud, and at the edge. We must enable intelligent analysis, transactions, and responsible governance for data everywhere, from creation through to deletion (through the entire lifecycle of ingestion, updates, exploration, data prep, analysis, serving, and archival).

Our commitment to innovation is reflected in our unique collaborative approach to product development. Product teams work in synergy with research and advanced development groups, including Cloud Information Services Lab, Gray Systems Lab, and Microsoft Research, to push boundaries, explore novel concepts and challenge hypotheses.

The Azure Data team continues to lead the way in on-premises and cloud-based database management. SQL Server has been identified as the top DBMS by Gartner for four consecutive years. Our aim is to rethink and redefine data management by developing optimal ways to capture, store, and analyze data.

I’m especially excited that this year we have three teams presenting their work: “Socrates: The New SQL Server in the Cloud,” “Automatically Indexing Millions of Databases in Microsoft Azure SQL Database,” and the Gray Systems Lab research team’s “Event Trend Aggregation Under Rich Event Matching Semantics.” 

The Socrates paper describes the foundations of Azure SQL Database Hyperscale, a revolutionary new cloud-native solution purpose-built to address common cloud scalability limits. It enables existing applications to scale elastically beyond fixed limits, without the need to rearchitect, and with storage of up to 100 TB.

Its highly scalable storage architecture enables a database to expand on demand, eliminating the need to pre-provision storage resources and providing the flexibility to optimize performance for workloads. The downtime to restore a database or to scale up or down is no longer tied to the volume of data in the database, and database point-in-time restores are very fast, typically taking minutes rather than hours or even days. For read-intensive workloads, Hyperscale provides rapid scale-out by provisioning additional read replicas instantaneously, without any data copy needed.

Learn more about Azure SQL Database Hyperscale.

Azure SQL Database also introduced a new serverless compute option: Azure SQL Database serverless. Serverless allows compute and memory to scale independently and on demand based on workload requirements. Compute is automatically paused and resumed, eliminating the need to manage capacity and reducing cost, and it is an efficient option for applications with unpredictable or intermittent compute requirements.

Learn more about Azure SQL Database serverless.

Index management is a challenging task even for expert human administrators. The ability to create efficiencies and fully automate the process is of critical significance to business, as discussed in the Data team’s presentation on the auto-indexing feature in Azure SQL Database.

That challenge, coupled with the need to achieve optimal query performance for complex real-world applications, underpins the auto-indexing feature.

The auto-indexing feature is generally available and generates index recommendations for every database in Azure SQL Database. If the customer chooses, it can automatically implement index changes on their behalf and validate these index changes to ensure that performance improves. This feature has already significantly improved the performance of hundreds of thousands of databases.

Discover the benefits of the auto-tuning feature in Azure SQL Database.

In the world of streaming systems, the key challenges are supporting rich event-matching semantics (e.g., Kleene patterns that capture event sequences of arbitrary length) and achieving scalability (i.e., controlling memory pressure and latency at very high event throughputs).

The advanced research team focused on supporting this class of queries at very high scale and compiled their findings in Event Trend Aggregation Under Rich Event Matching Semantics. The key insight is to incrementally maintain the coarsest-grained aggregates that can support a given query’s semantics, which keeps memory pressure under control and achieves very good latency at scale. By carefully implementing this insight, the team built a research prototype that achieves a six-orders-of-magnitude speed-up and up to seven orders of magnitude memory reduction compared to state-of-the-art approaches.

Microsoft has the unique advantage of a world-class data management system in SQL Server and a leading public cloud in Azure. This is especially exciting at a time when cloud-native architectures are revolutionizing database management.

There has never been a better time to be part of database systems innovation at Microsoft, and we invite you to explore the opportunities to be part of our team.

Enjoy SIGMOD 2019; it’s a fantastic conference! 

Automate MLOps workflows with Azure Machine Learning service CLI


This blog was co-authored by Jordan Edwards, Senior Program Manager, Azure Machine Learning

This year at Microsoft Build 2019, we announced a slew of new releases as part of Azure Machine Learning service, focused on MLOps. These capabilities help you automate and manage the end-to-end machine learning lifecycle.


Historically, Azure Machine Learning service’s management plane has been exposed via its Python SDK. To make the service more accessible to IT and app-development customers unfamiliar with Python, we have delivered an extension to the Azure CLI focused on interacting with Azure Machine Learning.

While it’s not a replacement for the Azure Machine Learning service Python SDK, it is a complementary tool that is optimized to handle highly parameterized tasks which lend themselves well to automation. With this new CLI, you can easily perform a variety of automated tasks against the machine learning workspace, including:

  • Datastore management
  • Compute target management
  • Experiment submission and job management
  • Model registration and deployment

Combining these commands enables you to train, register, package, and deploy your model as an API. To help you quickly get started with MLOps, we have also released a predefined template in Azure Pipelines. This template allows you to easily train, register, and deploy your machine learning models, and data scientists and developers can work together to build a custom application for their scenario from their own data set.

The Azure Machine Learning service command-line interface is an extension to the Azure CLI. It provides commands for working with Azure Machine Learning service from the command line and allows you to automate your machine learning workflows; a short command-line sketch follows the list below. Some key scenarios include:

  • Running experiments to create machine learning models
  • Registering machine learning models for customer usage
  • Packaging, deploying, and tracking the lifecycle of machine learning models
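As a rough sketch of that flow (commands from the azure-cli-ml extension as of this release; the flags shown are illustrative, so check az ml -h for the current syntax):

# Install the Azure Machine Learning CLI extension
az extension add -n azure-cli-ml

# Register a trained model file with the workspace
az ml model register -n mymodel -p ./outputs/model.pkl -w myworkspace -g myresourcegroup

# Deploy the registered model as a web service
az ml model deploy -n myservice -m mymodel:1 --ic inferenceconfig.json --dc deploymentconfig.json -w myworkspace -g myresourcegroup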

To use the Azure Machine Learning CLI, you must have an Azure subscription. If you don’t have an Azure subscription, you can create a free account before you begin. Try the free or paid version of Azure Machine Learning service to get started today.

Next steps

Learn more about the Azure Machine Learning service.

Get started with a free trial of the Azure Machine Learning service.

Visual Studio Code June 2019

Top Stories from the Microsoft DevOps Community – 2019.07.05


This week is a holiday week in the US, but this community is still going strong! Below are the highlights of some of the great overviews the Azure DevOps community has published. Enjoy the holiday weekend!

YAML-defined CI/CD for ASP .NET Core
This is a detailed walkthrough of the new process of configuring a YAML-based CI/CD pipeline for a .NET Core application in Azure Pipelines. It covers the Build, Unit Testing and Deployment stages, as well as additional tips and tricks. Big thanks to Shahed Chowdhuri for putting it all together.

Work Item Query Macros in Azure DevOps
This post is a great overview of the Macros available for Azure Boards Work Item queries, and how they can simplify the process of retrieving the data you need. Thanks to Dave Lloyd for creating this summary!

Azure Pipelines: Featuring Bash, YAML, JFrog and a custom Slack app!
This is a fun writeup from Jessica Dean on making custom Slack notifications for JFrog Artifactory and XRay using bash script tasks in Azure Pipelines, allowing you to style the Slack notifications.

How to delete content in Azure DevOps wiki
Have you ever needed to delete the entire Azure DevOps Wiki, rather than updating or deleting specific articles? This blog walks you through getting it done. Thanks to Ricci Gian Maria for putting it together!

Before you know, it is in production
And here is one more post about the Global DevOps Bootcamp, where Rob Bos walks us through the use of Selenium and Pipelines to implement functionality that was missing in the Eventbrite API.

If you’ve written an article about Azure DevOps or find some great content about DevOps on Azure, please share it with the #AzureDevOps hashtag on Twitter!

The post Top Stories from the Microsoft DevOps Community – 2019.07.05 appeared first on Azure DevOps Blog.

Real World Cloud Migrations: Moving a 17 year old series of sites from bare metal to Azure


Technical Debt has a way of sneaking up on you. While my podcast site and the other 16ish sites I run all live in Azure and have a nice CI/CD pipeline with Azure DevOps, my main "Hanselman.com" series of sites and mini-sites has lagged behind. I'm still happy with its responsive design, but the underlying tech has started to get more difficult to manage and build and I've decided it's time to make some updates.

Moving sites to Azure DevOps

I want to be able to make these updates and have a clean switchover so that you, the reader, don't notice a difference. There are a number of things to think about when doing any migration like this, realizing it'll take some weeks (or months if you're a bigger company than just me).

  • Continuous Deployment/Continuous Integration
    • I host my code on GitHub and Azure DevOps now lets you log in with GitHub and does a fine job of building AND deploying your code (while running tests AND allowing for manual quality gates) so I want to make sure my sites have a nice clean "check in and go live" process.
    • I'll also be using Azure App Services and Deployment Slots, so I'll have a dev/test/staging site and production, like a real professional. No more editing text files in production. Well, at least, I won't tell you when I'm editing text files in production.
  • Technology Update
    • Hanselman.com proper (not the blog) and the mini pages/sites underneath it run on ASP.NET 4.0 and WebForms. I was able to easily move the main site over to ASP.NET Razor Pages. Razor is just so elegant: it's basically just HTML, then you type @ and you're in C#. More on that below, but the upgrade took a day, as the home page and minisites are largely read-only.
    • The Blog, hosted at /blog, will be more challenging given I don't want to break two decades of URLs, along with the fact that it's running DasBlog on a recently upgraded .NET 4.0. DasBlog was originally made in .NET 1, then upgraded to .NET 2, so this is 17 years of technical debt.
    • That said, .NET Standard along with open source cross-platform .NET Core has allowed us - with the leadership of Mark Downie - to create DasBlog Core. DasBlog Core shares the core reliable (if crusty) engine of DasBlog along with an all-new system of URL rewriting using ASP.NET Core middleware, as well as a complete re-do of the (well ahead of its time) DasBlog Theming Engine, now based on Razor Pages. It's brilliant. This is in active development.
  • Azure Front Door
    • Because I'm moving from a single machine running IIS to Azure, I'll want to split things apart to remove single points of failure. I'll use Azure Front Door to manage my URL structure and act as a front-end cache, as well as distribute traffic to multiple Azure App Services (Web Apps).
  • URL management
    • Are you changing your URLs and URL structure? Remember that URLs are UI and they matter. I've long wanted to remove the "aspx" extension from my URLs, as well as move the TitleCaseBlogPostThing to a more "modern" title-case-blog-post-thing style. I need to do this in a way that updates my Google sitemap, breaks zero URLs, 301 redirects to the new style, and uses rel=canonical in a smart way (see the middleware sketch after this list).
  • Shared Assets/CDNs/Front Door
    • Since I run a family of sites, there's an opportunity to use a CDN as well and some clean CNAME DNS such that images.hanselman.com and images.hanselminutes.com can share assets. Since the Azure CDN is easy to set up and offers free SSL certs and pay-as-you-go pricing, I'll set both of those CNAMEs up to point to the same Azure Storage where I'll keep images, show pics, CSS, and JS.
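As a purely hypothetical sketch of the URL piece (not the actual implementation), the ASP.NET Core rewrite middleware can issue those 301s for legacy .aspx URLs:

// In Startup.Configure. Requires: using Microsoft.AspNetCore.Rewrite;
// A hypothetical sketch: real rules would also handle the
// TitleCase -> title-case conversion and emit rel=canonical.
app.UseRewriter(new RewriteOptions()
    .AddRedirect(@"^(.*)\.aspx$", "$1", statusCode: 301));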

I'll be blogging the whole process. What do you want to hear/learn about?


Sponsor: Seq delivers the diagnostics, dashboarding, and alerting capabilities needed by modern development teams - all on your infrastructure. Download now.



© 2019 Scott Hanselman. All rights reserved.
     