
Connect & Collaborate with the Bing Maps team at Microsoft Inspire


The Bing Maps team will be at Microsoft Inspire 2019, July 14th through the 18th, in Las Vegas, Nevada. If you are registered for the event, stop by the Bing Maps booth to learn more about our services and business development opportunities.

Bing Maps APIs form a Microsoft cloud service mapping platform that offers enterprise-grade, intelligent geospatial mapping and visualization solutions, from mobile apps to CRM to fleet management services and more. Visit our Developers page to explore our multiple API options, including REST Services, the Windows Store apps control, the WPF control, the Bing Maps V8 Control, Spatial Data Services, and Fleet Management Services (the Distance Matrix API, Isochrone API, Snap-To-Road API, Truck Routing API, and our newly announced Multi-Itinerary Optimization API).
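As a quick illustration of calling one of the REST Services, here is a minimal Python sketch that geocodes an address with the Locations API. The endpoint is the public dev.virtualearth.net REST URL; the key, query, and result handling are placeholders you would adapt to your own Basic or Enterprise key and scenario.

import requests

BING_MAPS_KEY = "<your Bing Maps key>"  # placeholder: use your own Basic or Enterprise key

def geocode(query):
    """Geocode a free-form address with the Bing Maps REST Locations API."""
    url = "https://dev.virtualearth.net/REST/v1/Locations"
    params = {"query": query, "key": BING_MAPS_KEY, "maxResults": 1}
    response = requests.get(url, params=params)
    response.raise_for_status()
    resources = response.json()["resourceSets"][0]["resources"]
    if not resources:
        return None
    latitude, longitude = resources[0]["point"]["coordinates"]
    return latitude, longitude

print(geocode("One Microsoft Way, Redmond, WA"))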

Microsoft Inspire 2019

The Bing Maps APIs platform offers a wide range of developer-friendly, flexible licensing options, and you can try it for free. Learn more about our basic and enterprise key licensing options.

To learn more about how you can develop and transform your apps with maps, visit the Bing Maps for Enterprise site, and join the Bing Maps APIs LinkedIn and follow the Bing Maps Twitter page for news and updates.

- Bing Maps Team

What’s better than ILDasm? ILSpy and dnSpy are tools to Decompile .NET Code


.NET code (C#, VB, F#, etc.) compiles (for the most part) into Intermediate Language (IL) and then makes its way to native code, usually by Just-in-Time (JIT) compilation on the target machine. When you get a DLL/Assembly, it's pre-chewed but not fully juiced, to mix my metaphors.

Often you'll come across a DLL that you want to learn more about. Sometimes you'll want to just see the structure of classes, methods, etc., and other times you want to see the IL - or a close representation of the original C#/VB/F#, etc. You're not looking at the source, you're seeing a backwards projection of the IL as whatever language you want. You're basically taking this pre-chewed food and taking it out of your mouth and getting a decent idea of what it was originally.

I've used ILDasm for years, but it's old and lame and people tease you for using it because they are cruel. ;)

Seriously, though, I use ILDasm - the IL Disassembler - simply because it's already installed. Those tweets got me thinking though that I need to update my options, so I'm trying out ILSpy and dnSpy.

ILSpy

ILSpy has been around for a while and has multiple front-ends, including ones for Linux/Mac/Windows based on Avalonia in the form of AvaloniaSpy. You can also integrate ILSpy into Visual Studio 2017 or 2019 with this extension. There is also a console decompiler and, interestingly, cross-platform PowerShell cmdlets.

ILSpy is a solid .NET decompiler

I've always liked the "Open List" feature of ILSpy where you can open a preconfigured list of assemblies you want to browse, like ASP.NET MVC, .NET 4, etc. A fun open source contribution for you might be to update the included lists with newer defaults. There are so many folks doing great work in open source out there, why not jump in and help them out?

dnSpy

dnSpy has a lovely UI AND a great Console app using the same engine. It's amazingly polished and VERY complete. I was surprised that it also has a full hex editor as well as property pages for common EXE file headers. From their GitHub page, dnSpy features:

  • Debug .NET Framework, .NET Core and Unity game assemblies, no source code required
  • Edit assemblies in C# or Visual Basic or IL, and edit all metadata
  • Light and dark themes
  • Extensible, write your own extension
  • High DPI support (per-monitor DPI aware)

dnSpy takes it to the next level with an integrated Debugger, meaning you can attach to a running process and debug it without source code - but it feels like source code because it's decompiling for you. Note that where it says C#, I can choose C#, VB, or IL as a "view" on my decompiled code.

dnSpy is amazing for looking inside .NET apps

Here is dnSpy actually debugging ILSpy and stopped at a decompiled breakpoint.


There's a lot of great low-level stuff in this space. Another cool tool is Reflexil, a .NET Assembly Editor as well as de4dot by the same mysterious author as dnSpy. Commercial Tools include Reflector and JustDecompile.

What's your favorite?


Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.



© 2018 Scott Hanselman. All rights reserved.
     

What’s new in Azure DevOps Sprint 152


Sprint 152 has just finished rolling out to all organizations, and you can check out all the cool features in the release notes. Here are just some of the features that you can start using today.

Wiki improvements

The wiki has a new look, and performance has been improved. For instance, the page navigation tree has been moved to the left to provide a natural data flow from left to right. Also, until now, the amount of vertical space was constrained by the header. With this update, we made the page scroll fully so that you get a lot more vertical space for your content. We’ve also added support for several HTML tags to create richer content, such as collapsible sections, figure captions for your images, and highlighted text.

The table creation and editing experience has just gotten better! You can now quickly create tables without worrying about the syntax, and autoformat markdown tables for a better reading experience.

Several new commands in the Azure DevOps CLI

With these new commands, you can quickly and easily create and manage pipelines, kick off a build and tag it at the same time, manage users and extensions, and invoke REST APIs straight from the command line:

  • az pipelines
  • az pipelines build tag
  • az devops user
  • az devops extension
  • az devops invoke

These are just the tip of the iceberg, and there are plenty more features that we’ve released in Sprint 152. Check out the full list of features for this sprint in the release notes.

The post What’s new in Azure DevOps Sprint 152 appeared first on Azure DevOps Blog.

Building a better asset and risk management platform with elastic Azure services


Elasticity means services can expand and contract on demand. This means Azure customers who are on a pay-as-you-go plan will reap the most benefit from Azure services: their service is always available, but the cost is kept to a minimum. This feature is so important that one Microsoft partner is using it as a point of differentiation.

Modular and elastic benefits

A key attribute of Azure is the interchangeable nature of its services. Together with elasticity, Azure lets modern enterprises migrate and evolve more easily. For financial service providers, the modular approach lets customers benefit from best-of-breed analytics in these areas:

  • Risk and performance analytics: Azure Data Lake Storage, DataBricks, and Azure Stream Analytics are just a few of the options for calculating risk.
  • Regulatory compliance automation (regtech): Automating compliance using Azure DevOps or using a service provider such as CloudNeeti simplifies an arduous task.
  • Investment management technology: Azure Virtual Machines or Azure Functions are just two options for managing investment portfolios.

With these capabilities, asset managers can build superior products that generate higher returns for their clients.

Financial services is a tough market

Competition in the asset management industry has ramped up for active vs. active, passive vs. active, and passive vs. passive strategies, while margins are shrinking. At the same time, costly, outdated, and difficult-to-maintain legacy systems and technology are impacting both costs and operational efficiencies, putting a further drag on performance, while also making it difficult to scale. A new Microsoft partner, Axioma, is helping its clients in the financial services industry to regain and retain a competitive edge.

On-premises means rigid resources

Many investment firms have relied on physical datacenters as a means of maintaining control and security. But such properties and legacy systems are costly to maintain and difficult to scale. Given market volatility, fee compression, and an overall competitive investment landscape, fund managers are seeking flexible solutions to discover, create, and implement superior investment strategies and products. Specifically, the need is for enterprise-wide analytics, data, reporting, and data storage.

Cloud elasticity is a vital attribute

Axioma offers an open and flexible platform, where each building block is accessible via APIs. The platform is built on a cloud-native architecture, but its modularity allows seamless integration with other best-of-breed providers. For example, Axioma Risk is an enterprise-wide multi-asset class (MAC) risk-management platform. With the solution, asset managers can efficiently scale assets under management (AUM) to drive revenue growth and reduce the effects of margin compression.

Build a unified platform with Azure

When using Azure to build a platform, the users of the platform benefit from a common architecture. For example, Axioma helps clients migrate solutions to its platform, axiomaBlue. The clients then benefit from a common engine that calculates risk and performance analytics. Having one engine on the platform also means using the same underlying market and reference data. Clients, therefore, have a consistent view of risk and return across their enterprise and across front, middle, and back-office functions.

On a specialized platform, users can create flexible, modular, workflow solutions. For financial services, the platform approach means a highly specialized set of components, as shown in this graphic.

Diagram showing a highly specialized set of components

Azure services used

Axioma is a primary example of using the elastic and modular attributes of Azure to its fullest extent. They use these Azure services:

Recommended next steps

Go to the Azure marketplace listing for AxiomaBlue and click Contact me.

A look at Azure’s automated machine learning capabilities


The automated machine learning capability in Azure Machine Learning service allows data scientists, analysts, and developers to build machine learning models with high scalability, efficiency, and productivity, all while sustaining model quality. Automated machine learning builds a set of machine learning models automatically, intelligently selecting models for training and then recommending the best one for your scenario and data set. Traditional machine learning model development is resource-intensive, requiring both significant domain knowledge and time to produce and compare dozens of models.

With the announcement of automated machine learning in Azure Machine Learning service as generally available last December, we have started the journey to simplify artificial intelligence (AI). This helps data scientists who want to automate part of their machine learning workflow so they can spend more time focusing on other business objectives. It also makes AI available for a wider audience of business users who don’t have advanced data science and coding knowledge.

We are furthering our investment in accelerating productivity with this release, which includes exciting capabilities and features in the areas of model quality, improved model transparency, the latest integrations, ONNX support, a code-free user interface, time series forecasting, and product integrations.

1. Automated machine learning no-code web interface (preview)

Continuing our mission to simplify machine learning, Azure introduced the automated machine learning web user interface in the Azure portal. The web user interface enables business domain experts to train models on their data without writing a single line of code. Users can simply bring their data and, with a few clicks, start training on it. After automated machine learning comes up with the best possible model, customized to the user’s data, they can deploy the model to the Azure Machine Learning service as a web service to generate future predictions on new data.

To start exploring the automated machine learning UI, simply go to the Azure portal and navigate to an Azure Machine Learning workspace, where you will see “Automated machine learning” under the “Authoring” section. If you don’t have an Azure Machine Learning workspace yet, you can always learn how to create a workspace. To learn more, refer to the automated machine learning UI blog.

Gif image for creating a new automated machine learning experiment

2. Time series forecasting

Building forecasts is an integral part of any business, whether it’s revenue, inventory, sales, or customer demand. Forecasting with automated machine learning is now generally available. These capabilities improve the accuracy and performance of recommended models for time series data, including a forecast prediction function, rolling-origin cross validation splits for time series data, configurable lags, rolling window aggregation, and a holiday featurizer. This ensures high-accuracy forecasting models and supports automating machine learning across many scenarios.

To learn more, refer to the how to guide with time series data and samples on GitHub.

3. Model transparency

We understand transparency is very important for you to trust the models recommended by automated machine learning.

  • Now you can understand all steps in the machine learning pipeline, including automated featurization (if you set preprocess=True). Learn more about all the preprocessing and featurization steps that automated machine learning performs. You can also programmatically understand how your input data was preprocessed and featurized, what kind of scaling and normalization was done, and the exact machine learning algorithm and hyperparameter values for a chosen machine learning pipeline. Follow these steps to learn more.
  • Model interpretability (feature importance) was enabled as a preview capability back in December. Since then, we have made improvements, including a significant performance boost.

4. ONNX Models (preview)

In many enterprises, data scientists build models in Python since the popular machine learning frameworks are in Python. Many Azure Machine Learning service users also create models using Python. However, in many deployment environments, line-of-business applications are written in C# or Java, requiring users to “recode” the model. This adds a lot of friction, and many times models never get deployed into production. With ONNX support, users can build ONNX models using automated machine learning and integrate them with C# applications, without recoding.
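As an illustration (not taken from the original post), here is a minimal Python sketch that loads and scores an ONNX model with the open-source onnxruntime package; the model path, feature count, and sample input are assumptions, and a C# application would follow the same pattern with the ONNX Runtime NuGet package.

import numpy as np
import onnxruntime as ort

# Load an ONNX model exported by automated ML (the file name is a placeholder).
session = ort.InferenceSession("model.onnx")

# Input name and shape depend on the exported model, so query them at runtime.
input_meta = session.get_inputs()[0]
print(input_meta.name, input_meta.shape)

# Purely illustrative input: one row of four float features.
sample = np.random.rand(1, 4).astype(np.float32)
predictions = session.run(None, {input_meta.name: sample})
print(predictions[0])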

To find out more information, please visit the GitHub notebook.

5. Enabling .NET developers using Visual Studio/VS Code (preview)

Empower your applications with automated machine learning while remaining in the comfort of the .NET ecosystem. The .NET automated machine learning API enables developers to leverage automated machine learning capabilities without needing to learn Python. Seamlessly integrate automated machine learning within your existing .NET project by using the API's NuGet package. Tackle your binary classification, multiclass classification, and regression tasks within Visual Studio and Visual Studio Code.

6. Empowering data analysts in PowerBI (preview)

We have enabled data analysts and BI professionals using PowerBI to build, deploy, and inference machine learning models, all within PowerBI. This integration allows PowerBI customers to use their data in PowerBI dataflows and leverage the automated machine learning capability of the Azure Machine Learning service to build models with a no-code experience, and then deploy and use the models from PowerBI. Imagine the kind of machine learning powered PowerBI applications and reports you can create with this capability.

7. Automated machine learning in SQL Server

If you are looking to build models using your data in SQL Server with your favorite SQL Server Management Studio interface, you can now leverage automated machine learning in Azure Machine Learning service to build, deploy, and use models. This is made possible by simply wrapping Python-based machine learning training and inferencing scripts in SQL stored procedures. This is well suited for use with data residing in SQL Server tables and provides an ideal solution for any version of SQL Server that supports SQL Server Machine Learning Services.

8. Automated machine learning in Spark

HDInsight has been integrated with automated machine learning. With this integration, customers who use automated machine learning can now effortlessly process massive amounts of data and get all the benefits of a broad, open source ecosystem with the global scale of Azure to run automated machine learning experiments. HDInsight allows customers to provision clusters with hundreds of nodes. Automated machine learning running on Apache Spark in an HDInsight cluster allows users to use compute capacity across these nodes to run training jobs at scale, as well as to run multiple training jobs in parallel. This allows users to run automated machine learning experiments while sharing the compute with their other big data workloads. To find out more information, please visit the GitHub notebooks and documentation.

We support automated machine learning on Azure Databricks clusters with a simple installation of the SDK in the cluster. You can get started by visiting the “Azure Databricks” section in our documentation, “Configure a development environment for Azure Machine Learning.”

Improved accuracy and performance

Since we announced general availability back in December, we have added several new capabilities to generate high quality models in a shorter amount of time (a brief configuration sketch follows the list below).

  • An intelligent stopping capability that automatically figures out when to stop an experiment based on progress made on the primary metric. If no significant improvement is seen in the primary metric, an experiment is automatically stopped, saving you time and compute.

  • With the goal of exploring a greater number of model pipelines in a given amount of time, users can leverage a sub-sampling strategy to train much faster, while minimizing loss.

  • Specify preprocess=True to intelligently search across different featurization strategies and find the best one for the specified data, with the goal of getting to a better model (as shown in the sketch after this list). Learn more about the various preprocessing/featurization steps.

  • XGBoost has been added to the set of learners that automated machine learning explores, as we see XGBoost models performing well.

  • Improved support for larger datasets, currently supporting datasets up to 10GB in size.
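Here is a rough configuration sketch pulling several of these options together. It assumes the azureml-sdk[automl] Python package, a workspace config.json, and a registered tabular dataset named "customer-churn" with a "churned" label column; all of those names are hypothetical, and the parameter names reflect the SDK at the time of writing, so they may have since been renamed.

from azureml.core import Workspace, Dataset, Experiment
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()                               # assumes a local config.json
train_dataset = Dataset.get_by_name(ws, "customer-churn")  # hypothetical registered dataset

automl_config = AutoMLConfig(
    task="classification",
    primary_metric="AUC_weighted",
    training_data=train_dataset,
    label_column_name="churned",          # hypothetical label column
    preprocess=True,                      # search featurization strategies automatically
    enable_early_stopping=True,           # intelligent stopping on the primary metric
    n_cross_validations=5,
)

run = Experiment(ws, "automl-demo").submit(automl_config, show_output=True)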

Learn more

Automated machine learning makes machine learning more accessible for data scientists of all levels of experience. Get started by visiting our documentation and let us know what you think. We are committed to making automated machine learning better for you!

Learn more about the Azure Machine Learning service.

Get started with a free trial of the Azure Machine Learning service.

Announcing self-serve experience for Azure Event Hubs Clusters


For businesses today, data is indispensable. Innovative ideas in manufacturing, health care, transportation, and financial industries are often the result of capturing and correlating data from multiple sources. Now more than ever, the ability to reliably ingest and respond to large volumes of data in real time is the key to gaining competitive advantage for consumer and commercial businesses alike. To meet these big data challenges, Azure Event Hubs offers a fully managed and massively scalable distributed streaming platform designed for a plethora of use cases from telemetry processing to fraud detection.

Event Hubs has been immensely popular with Azure’s largest customers and now even more so with the recent release of Event Hubs for Apache Kafka. With this powerful new capability, customers can stream events from Kafka applications seamlessly into Event Hubs without having to run Zookeeper or manage Kafka clusters, all while benefitting from a fully managed platform-as-a-service (PaaS) with features like auto-inflate and geo-disaster recovery. As the front door to Azure’s data pipeline, Event Hubs also lets customers automatically Capture streaming events into Azure Storage or Azure Data Lake, or natively perform real-time analysis on data streams using Azure Stream Analytics.
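To make the Kafka compatibility concrete, here is a hedged Python sketch using the open-source kafka-python client pointed at an Event Hubs Kafka endpoint. The namespace, connection string, and event hub (topic) name are placeholders; the $ConnectionString username and port 9093 follow the documented Event Hubs for Kafka pattern.

from kafka import KafkaProducer  # pip install kafka-python

NAMESPACE = "mynamespace"  # placeholder Event Hubs namespace
CONNECTION_STRING = "Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=..."

producer = KafkaProducer(
    bootstrap_servers=f"{NAMESPACE}.servicebus.windows.net:9093",
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username="$ConnectionString",  # literal string required by Event Hubs
    sasl_plain_password=CONNECTION_STRING,
)

# Each event hub in the namespace appears as a Kafka topic.
producer.send("telemetry", b'{"deviceId": "sensor-01", "temperature": 21.7}')
producer.flush()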

Azure Event Hubs Pipeline

For customers with the most demanding streaming needs, Event Hubs clusters in our Dedicated tier provide a single-tenant offering that guarantees the capacity to ingest millions of events per second while boasting a 99.99% SLA. Clusters are used by the Xbox One Halo team and power both the Microsoft Teams and Microsoft Office client application telemetry pipelines.

Azure portal experience for Event Hubs clusters

Today, we are excited to announce that Azure Event Hubs clusters can be easily created through the Azure portal or through Azure Resource Manager as a self-serve experience (preview) and are instantly available with no further setup. Within your cluster, you can subsequently create and manage namespaces and event hubs as usual and ingest events with no throttling. Creating a cluster to contain your event hubs offers the following benefits:

  • Single tenant hosting for better performance with guaranteed capacity at full scale, enabling ingress of gigabytes of streaming data at millions of events per second while maintaining fully durable storage and sub-second latency.
  • Capture feature included at no additional cost, which allows you to effortlessly batch and deliver your events to Azure Storage or Azure Data Lake.
  • Significant savings on your Event Hubs cloud costs with fixed hourly billing while scaling your infrastructure with Dedicated Event Hubs.
  • No maintenance since we take care of load balancing, security patching, and OS updates. You can spend less time on infrastructure maintenance and more time building client-side features.
  • Exclusive access to upcoming features like bring your own key (BYOK).

In the self-serve experience (preview), you can create 1 CU clusters in the following strategic regions through the Azure portal:

  • North Europe
  • West Europe
  • Central US
  • East US
  • East US 2
  • West US
  • West US 2
  • North Central US
  • South Central US
  • Southeast Asia
  • UK South

Larger clusters of up to 20 CUs or clusters in regions not listed above will also be available upon direct request to the Event Hubs team.

Data is key to staying competitive in this fast moving world and Azure Event Hubs can help your organization gain the competitive edge. With so many possibilities, it’s time to get started.

Securing the hybrid cloud with Azure Security Center and Azure Sentinel


Infrastructure security is top of mind for organizations managing workloads on-premises, in the cloud, or hybrid. Keeping on top of an ever-changing security landscape presents a major challenge. Fortunately, the power and scale of the public cloud has unlocked powerful new capabilities for helping security operations stay ahead of the changing threat landscape. Microsoft has developed a number of popular cloud based security technologies that continue to evolve as we gather input from customers. Today we’d like to break down a few key Azure security capabilities and explain how they work together to provide layers of protection.

Azure Security Center provides unified security management by identifying and fixing misconfigurations and providing visibility into threats to quickly remediate them. Security Center has grown rapidly in usage and capabilities, and allowed us to pilot many new solutions, including a security information and event management (SIEM)-like functionality called investigations. While the response to the investigations experience was positive, customers asked us to build out more capabilities. At the same time, the traditional business model of Security Center, which is priced per resource such as per virtual machine (VM), doesn’t necessarily fit for SIEM. We realized that our customers needed a full-fledged standalone SIEM solution that stood apart from and integrated with Security Center, so we created Azure Sentinel. This blog post clarifies what each product does and how Azure Security Center relates to Azure Sentinel.

Going forward, Security Center will continue to develop capabilities in three main areas:

  1. Cloud security posture management: Security Center provides you with a bird’s eye security posture view across your Azure environment, enabling you to continuously monitor and improve your security posture using the Azure secure score. Security Center helps you identify and perform the hardening tasks recommended as security best practices and implement them across your machines, data services, and apps. This includes managing and enforcing your security policies and making sure your Azure Virtual Machine instances, non-Azure servers, and Azure PaaS services are compliant. With newly added IoT capabilities, you can now reduce the attack surface of your Azure IoT solution and remediate issues before they can be exploited. We will continue to expand our resource coverage and the depth of insights available in security posture management. In addition to providing full visibility into the security posture of your environment, Security Center also provides visibility into the compliance state of your Azure environment against common regulatory standards.
  2. Cloud workload protection: Security Center's threat protection enables you to detect and prevent threats at the infrastructure-as-a-service (IaaS) layer as well as in platform-as-a-service (PaaS) resources like Azure IoT and Azure App Service and on-premises virtual machines. Key features of Security Center threat protection include configuration monitoring, server endpoint detection and response (EDR), application control, and network segmentation, and it is extending to support container and serverless workloads.
  3. Data security: Security Center includes capabilities that identify breaches and anomalous activities against your SQL databases, data warehouse, and storage accounts, and will be extending to other data services. In addition, Security Center helps you perform automatic classification of your data in Azure SQL database.

When it comes to cloud workload protection, the goal is to present the information to users within Security Center in an easy-to-consume manner so that you can address individual threats. Security Center is not intended for advanced security operations (SecOps) hunting scenarios or to be a SIEM tool.

Going forward, SIEM and security orchestration and automated response (SOAR) capabilities will be delivered in Azure Sentinel. Azure Sentinel delivers intelligent security analytics and threat intelligence across the enterprise, providing a single solution for alert detection, threat visibility, proactive hunting, and threat response.

Azure Sentinel is your security operations center (SOC) view across the enterprise, alleviating the stress of increasingly sophisticated attacks, increasing volumes of alerts, and long resolution timeframes. With Azure Sentinel you can:

  • Collect data at cloud scale across all users, devices, applications, and infrastructure, both on-premises and in multiple clouds.
  • Integrate curated alerts from Microsoft’s security products like Security Center, Microsoft Threat Protection, and from your non-Microsoft security solutions.
  • Detect previously undetected threats and minimize false positives using Microsoft Intelligent Security Graph, which uses trillions of signals from Microsoft services and systems around the globe to identify new and evolving threats. Investigate threats with artificial intelligence and hunt for suspicious activities at scale, tapping into years of cyber security experience at Microsoft.
  • Respond to incidents rapidly with built-in orchestration and automation of common tasks.

SIEMs typically integrate with a broad range of applications, including threat intelligence applications for specific workloads, and the same is true for Azure Sentinel. SecOps teams have the full power to query the raw data, use AI models, and even build their own models.

So how does Azure Security Center relate to Azure Sentinel?

Security Center is one of the many sources of threat protection information that Azure Sentinel collects data from to create a view for the entire organization. Microsoft recommends that customers using Azure use Azure Security Center for threat protection of workloads such as VMs, SQL, Storage, and IoT; in just a few clicks, they can connect Azure Security Center to Azure Sentinel. Once the Security Center data is in Azure Sentinel, customers can combine that data with other sources like firewalls, users, and devices, for proactive hunting and threat mitigation with advanced querying and the power of artificial intelligence.

Diagram representing how Azure Sentinel connects with Azure Security Center

Are there any changes to Security Center as a result of this strategy?

To reduce confusion and simplify the user experience, two of the early SIEM-like features in Security Center, namely the investigation flow in security alerts and custom alerts, will be removed in the near future. Individual alerts remain in Security Center, and there are equivalents for both security alerts and custom alerts in Azure Sentinel.

Going forward, Microsoft will continue to invest in both Azure Security Center and Azure Sentinel. Azure Security Center will continue to be the unified infrastructure security management system for cloud security posture management and cloud workload protection. Azure Sentinel will continue to focus on SIEM.

To learn more about both products, please visit the Azure Sentinel home page or Azure Security Center home page.


Empowering clinicians with mobile health data: Right information, right place, right time


Improving patient outcomes and reducing healthcare costs depends on the ability of healthcare providers, such as doctors, nurses, and specialized clinicians, to access a wide range of data at the point of patient care in the form of health records, lab results, and protocols. Tactuum, a Microsoft partner, provides the Quris solution that gives clinicians access to the right information, in the right place, at the right time, enabling them to do their jobs efficiently and with less room for error.

The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how one Microsoft partner uses Azure to solve a unique problem.

Information fragmentation results in poor quality of care

A patient is brought into the emergency department with a deep cut to the leg. The wound is several days old and the patient is exhibiting symptoms of illness, perhaps infection. As a clinician, you know the hospital has a clear protocol for wound management and possible infections. Do you know where to find this information quickly? Is it on a wiki, internal website, or on paper in a binder? Lastly, is it current? Finding the right information in these conditions can be time-consuming and stressful. Or worse, it could be inaccurate and out of date.

In many healthcare provider organizations today, information is fragmented between electronic health records (EHR), online third-party sites, intranet sites, and paper. Additionally, some information may be on secured sites that are not visible to everyone, and data disappears if it’s unavailable offline. This situation can be detrimental to the quality of patient care because critical data is available too late or not at all. Even with internet access, the wrong information may come from a search engine. Aside from the logistical challenges of making data available, it’s important to ensure that only the right information is found. The enduring challenge, then, is getting the right information to the right person, in the right place, and at the right time.

The searchability cost of file systems

Even a facility with modern IT resources such as computers, tablets, or specialized instruments presents obstacles in the search for information. Users must navigate through the network and tunnel into folders, backtracking if they are wrong. Some folders may not be available to everyone or require asking for permission when time is of the essence. Websites and apps may also require authorization. So what happens if a device is offline? Computer systems present their own hurdles to quick access.

Solution

This challenge has become the problem to solve for one Microsoft partner, Tactuum, which created the Quris Clinical Companion. Working with some leading hospitals, including the University of Washington and the University of Michigan, they are solving the problem for healthcare. From the Tactuum website comes this description:

“Our flagship product allows organizations to push out to staff, in real-time, the latest guidelines, protocols, algorithms, calculators and clinical handbooks. Put your existing clinical resources into clinicians’ hands right now and know that they're using the latest and most up-to-date information.”

Tactuum has a few notable goals:

  • Right information: The content is vetted, with security safeguards. The content is easy to use, and data consumption insights are provided.
  • Right place: Available where you need it through mobile devices, workstations, and EHR systems.
  • Right time: Available online and offline. When online, real-time updates become possible.
  • Right cost: Minimal IT involvement, low maintenance, and no paper or printing required.

The graphic below illustrates the components and workflow of the system.

Infographic for Clinical Knowledge Manager (CKM)

Benefits

  • Improved quality of care due to more effective decision-making (quicker and more reliable).
  • Cost savings from reduced printing, easier maintenance, and streamlined distribution.
  • Innovation through powerful data and analytics.

The solution supports improving patient outcomes with critical information at the point of patient care, saving both time and money. Here’s one example, according to a registered nurse and Quris user at Airlift Northwest in Seattle:

“Time savings has been immeasurable. In the past it was required to have a workgroup of staff, educators, and medical directors to review and update the hardcopy “Bluebook.” This was very expensive and required significant time. Now, a smaller group reviews policies and resources, does updates, and uploads it directly to the organization’s server for immediate use.”

Azure services

The Microsoft Azure worldwide presence and extensive compliance portfolio provide the backbone of the Quris solution, including the following key services:

  • Web Apps: Supports Windows and Linux
  • Blob Storage: Multiple blob types, hot, cool, and archive tiers
  • Azure Active Directory: Identity services that work with your on-premises, cloud, or hybrid environment
  • Azure SQL Database: Unmatched scale and high availability for compute and storage
  • Xamarin: Connects apps to enterprise systems, in the cloud or on premises

Next steps

To see more about Azure in the healthcare industry see Azure for health.

Go to the Azure Marketplace listing for Quris and select Contact me.

AzureR and AzureKeyVault


by Hong Ooi, senior data scientist, Microsoft Azure

Just a couple of announcements regarding my family of packages for working with Azure from R.

First, the packages have moved from the cloudyr org on GitHub to the Azure org, thus making them "official". A (rather spartan) homepage is here, containing links to the individual repos:

https://github.com/Azure/AzureR

The cloudyr repos will remain, but going forward they'll be mirrors of the main repos in Azure. Please submit issues and PRs to the Azure repos.

Second, the AzureKeyVault package is now available, on GitHub and on CRAN! This is an interface to the Key Vault service, which allows secure storage of cryptographic keys, certificates, and generic secrets. Both Resource Manager and client interfaces are provided. The package allows you to carry out operations such as encryption and decryption, certificate signing, and managing storage account keys.

Some sample code:

 

Any comments and feedback welcome.

Windows 10 SDK Preview Build 18908 available now!


Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 18908 or greater). The Preview SDK Build 18908 contains bug fixes and under-development changes to the API surface area.

The Preview SDK can be downloaded from the developer section on Windows Insider.

For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.

Things to note:

  • This build works in conjunction with previously released SDKs and Visual Studio 2017 and 2019. You can install this SDK and still continue to submit your apps that target Windows 10 build 1903 or earlier to the Store.
  • The Windows SDK will now formally only be supported by Visual Studio 2017 and greater. You can download Visual Studio 2019 here.
  • This build of the Windows SDK will install ONLY on Windows 10 Insider Preview builds.
  • In order to assist with script access to the SDK, the ISO can also be accessed through the following static URL: https://software-download.microsoft.com/download/sg/Windows_InsiderPreview_SDK_en-us_18908_1.iso.

Tools Updates

Message Compiler (mc.exe)

  • Now detects the Unicode byte order mark (BOM) in .mc files. If the .mc file starts with a UTF-8 BOM, it will be read as a UTF-8 file. Otherwise, if it starts with a UTF-16LE BOM, it will be read as a UTF-16LE file. If the -u parameter was specified, it will be read as a UTF-16LE file. Otherwise, it will be read using the current code page (CP_ACP).
  • Now avoids one-definition-rule (ODR) problems in MC-generated C/C++ ETW helpers caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of MCGEN_EVENTWRITETRANSFER are linked into the same binary, the MC-generated ETW helpers will now respect the definition of MCGEN_EVENTWRITETRANSFER in each .cpp file instead of arbitrarily picking one or the other).

Windows Trace Preprocessor (tracewpp.exe)

  • Now supports Unicode input (.ini, .tpl, and source code) files. Input files starting with a UTF-8 or UTF-16 byte order mark (BOM) will be read as Unicode. Input files that do not start with a BOM will be read using the current code page (CP_ACP). For backwards-compatibility, if the -UnicodeIgnore command-line parameter is specified, files starting with a UTF-16 BOM will be treated as empty.
  • Now supports Unicode output (.tmh) files. By default, output files will be encoded using the current code page (CP_ACP). Use command-line parameters -cp:UTF-8 or -cp:UTF-16 to generate Unicode output files.
  • Behavior change: tracewpp now converts all input text to Unicode, performs processing in Unicode, and converts output text to the specified output encoding. Earlier versions of tracewpp avoided Unicode conversions and performed text processing assuming a single-byte character set. This may lead to behavior changes in cases where the input files do not conform to the current code page. In cases where this is a problem, consider converting the input files to UTF-8 (with BOM) and/or using the -cp:UTF-8 command-line parameter to avoid encoding ambiguity.

TraceLoggingProvider.h

  • Now avoids one-definition-rule (ODR) problems caused by conflicting configuration macros (e.g. when two .cpp files with conflicting definitions of TLG_EVENT_WRITE_TRANSFER are linked into the same binary, the TraceLoggingProvider.h helpers will now respect the definition of TLG_EVENT_WRITE_TRANSFER in each .cpp file instead of arbitrarily picking one or the other).
  • In C++ code, the TraceLoggingWrite macro has been updated to enable better code sharing between similar events using variadic templates.

Breaking Changes

Removal of IRPROPS.LIB

In this release irprops.lib has been removed from the Windows SDK. Apps that were linking against irprops.lib can switch to bthprops.lib as a drop-in replacement.

API Updates, Additions and Removals

The following APIs have been added to the platform since the release of Windows 10 SDK, version 1903, build 18362.

Additions:


namespace Windows.Foundation.Metadata {
  public sealed class AttributeNameAttribute : Attribute
  public sealed class FastAbiAttribute : Attribute
  public sealed class NoExceptionAttribute : Attribute
}
namespace Windows.Graphics.Capture {
  public sealed class GraphicsCaptureSession : IClosable {
    bool IsCursorCaptureEnabled { get; set; }
  }
}
namespace Windows.Management.Deployment {
  public enum DeploymentOptions : uint {
    AttachPackage = (uint)4194304,
  }
  public sealed class PackageManager {
    IIterable<Package> FindProvisionedPackages();
  }
}
namespace Windows.Networking.BackgroundTransfer {
  public sealed class DownloadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
    void RemoveRequestHeader(string headerName);
    void SetRequestHeader(string headerName, string headerValue);
  }
  public sealed class UploadOperation : IBackgroundTransferOperation, IBackgroundTransferOperationPriority {
    void RemoveRequestHeader(string headerName);
    void SetRequestHeader(string headerName, string headerValue);
  }
}
namespace Windows.Storage {
  public sealed class StorageFile : IInputStreamReference, IRandomAccessStreamReference, IStorageFile, IStorageFile2, IStorageFilePropertiesWithAvailability, IStorageItem, IStorageItem2, IStorageItemProperties, IStorageItemProperties2, IStorageItemPropertiesWithProvider {
    public static IAsyncOperation<StorageFile> GetFileFromPathForUserAsync(User user, string path);
  }
  public sealed class StorageFolder : IStorageFolder, IStorageFolder2, IStorageFolderQueryOperations, IStorageItem, IStorageItem2, IStorageItemProperties, IStorageItemProperties2, IStorageItemPropertiesWithProvider {
    public static IAsyncOperation<StorageFolder> GetFolderFromPathForUserAsync(User user, string path);
  }
}
namespace Windows.UI.Composition.Particles {
  public sealed class ParticleAttractor : CompositionObject
  public sealed class ParticleAttractorCollection : CompositionObject, IIterable<ParticleAttractor>, IVector<ParticleAttractor>
  public class ParticleBaseBehavior : CompositionObject
  public sealed class ParticleBehaviors : CompositionObject
  public sealed class ParticleColorBehavior : ParticleBaseBehavior
  public struct ParticleColorBinding
  public sealed class ParticleColorBindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleColorBinding>>, IMap<float, ParticleColorBinding>
  public enum ParticleEmitFrom
  public sealed class ParticleEmitterVisual : ContainerVisual
  public sealed class ParticleGenerator : CompositionObject
  public enum ParticleInputSource
  public enum ParticleReferenceFrame
  public sealed class ParticleScalarBehavior : ParticleBaseBehavior
  public struct ParticleScalarBinding
  public sealed class ParticleScalarBindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleScalarBinding>>, IMap<float, ParticleScalarBinding>
  public enum ParticleSortMode
  public sealed class ParticleVector2Behavior : ParticleBaseBehavior
  public struct ParticleVector2Binding
  public sealed class ParticleVector2BindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleVector2Binding>>, IMap<float, ParticleVector2Binding>
  public sealed class ParticleVector3Behavior : ParticleBaseBehavior
  public struct ParticleVector3Binding
  public sealed class ParticleVector3BindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleVector3Binding>>, IMap<float, ParticleVector3Binding>
  public sealed class ParticleVector4Behavior : ParticleBaseBehavior
  public struct ParticleVector4Binding
  public sealed class ParticleVector4BindingCollection : CompositionObject, IIterable<IKeyValuePair<float, ParticleVector4Binding>>, IMap<float, ParticleVector4Binding>
}
namespace Windows.UI.ViewManagement {
  public enum ApplicationViewMode {
    Spanning = 2,
  }
}
namespace Windows.UI.WindowManagement {
  public sealed class AppWindow {
    void SetPreferredTopMost();
    void SetRelativeZOrderBeneath(AppWindow appWindow);
  }
  public enum AppWindowPresentationKind {
    Spanning = 4,
  }
  public sealed class SpanningPresentationConfiguration : AppWindowPresentationConfiguration
}

The post Windows 10 SDK Preview Build 18908 available now! appeared first on Windows Developer Blog.

Going all in on ‘Suggest a Feature’ in Visual Studio Developer Community


In October 2018, we announced that the Developer Community site we have used for reporting issues would also handle feature requests, giving you one convenient place for both. We also shared the plan to migrate from the UserVoice forum to Developer Community. Since then, we have received and responded to over 2,500 new feature suggestions on Developer Community, with hundreds of those shipped in Visual Studio. Thank you for making the move and continuing to help us improve the functionality in Visual Studio! Now that feature suggestions are fully up and running on Developer Community, we have taken the final step of the move by turning off the UserVoice forum. If you have an idea or a request for a feature, you can now use Suggest a Feature on the Developer Community site or in Visual Studio (as shown below) to make your suggestions.

Developer Community

Visual Studio

You can also browse suggestions from other developers and Vote for your favorite features to help us understand the impact to the community.

Thank you! We are looking forward to hearing your suggestions. We also encourage you to learn more about suggestions to get the most out of the experience. Thank you for the valuable feedback you provide in Visual Studio and for your participation in Developer Community!

The post Going all in on ‘Suggest a Feature’ in Visual Studio Developer Community appeared first on The Visual Studio Blog.


Ask Me Anything – “Network” with teams from Azure Networking!


Which third-party devices are supported for connecting to Azure VPN Gateway? Can I connect to multiple sites from the same virtual network? Ask these questions and more during the next Ask Me Anything (AMA) session via Twitter on Tuesday, June 11, 2019, from 10:00 AM to 11:30 AM Pacific Time.

This is your opportunity to ask questions about our products, services, or even the team itself, directly to members of the Azure Networking product teams.

Tell us about your experiences; we want your valuable insights into how we can improve the service.

To get involved, follow @AzureSupport on Twitter and send a tweet with the hashtag  "#AzNetworkingAMA". Then during the event, members from the product teams will start answering your questions.

How it works

AMA stands for Ask Me Anything, which is a less formal way to get answers to your questions directly from the engineers and product managers. It’s an opportunity for a live conversation with the experts who are responsible for building and maintaining Azure services.

During the live session, you can ask questions by tweeting at @AzureSupport with the hashtag #AzNetworkingAMA. Your question can span multiple tweets by replying to the first tweet you post with this hashtag.

In a different time zone? No problem. Start tweeting your questions in advance and we’ll answer during the event.

You really can ask anything you’d like, but here’s a list of question ideas to get you started:

  • What’s the difference between App Gateway and VPN Gateway?
  • Can I delegate an Azure DNS subdomain?
  • What features are currently planned or in development?
  • What is the difference between App Gateway and Azure Load Balancer?
  • How much do I get charged for App Gateway?
  • Why should I use the V2 SKU of App Gateway vs the V1?
  • How does App Gateway compare with Azure Front Door?
  • Can I use App Gateway for purely “private” (not internet facing) applications?
  • Which protocols are supported on Azure VPN Gateway?

The Azure Networking AMA is a great way for you to get inside the minds that build the products you love, and continues our series of AMAs that connect customers directly with developers. To learn more about some of our previous AMAs, you can read about the Azure Backup AMA and the Azure Integration Services AMA.

Get out and tweet @AzureSupport.

Using Text Analytics in call centers


Azure Cognitive Services provides Text Analytics APIs that simplify extracting information from text data using natural language processing and machine learning. These APIs wrap pre-built language processing capabilities, for example, sentiment analysis, key phrase extraction, entity recognition, and language detection.

Using Text Analytics, businesses can draw deeper insights from interactions with their customers. These insights can be used to create management reports, automate business processes, perform competitive analysis, and more. One area that can provide such insights is recorded customer service calls, which can provide the necessary data to:

  • Measure and improve customer satisfaction
  • Track call center and agent performance
  • Look into performance of various service areas

In this blog, we will look at how we can gain insights from these recorded customer calls using Azure Cognitive Services.

Using a combination of these services, such as Text Analytics and Speech APIs, we can extract information from the content of customer and agent conversations. We can then visualize the results and look for trends and patterns.

Diagram showing how combination of Cognitive Services can extract information

The sequence is as follows:

  • Using Azure Speech APIs, we can convert the recorded calls to text. With the text transcriptions in hand, we can then run Text Analytics APIs to gain more insight into the content of the conversations.
  • The sentiment analysis API provides information on the overall sentiment of the text in three categories: positive, neutral, and negative. At each turn of the conversation between the agent and customer, we can:
    • See how the customer sentiment is improving, staying the same, or declining.
    • Evaluate the call, the agent, or both for effectiveness in handling customer complaints at different times.
    • See when an agent is consistently able to turn negative conversations into positive ones (or vice versa) and identify opportunities for training.
  • Using the key phrase extraction API, we can extract the key phrases in the conversation. This data, in combination with the detected sentiment, can be used to assign categories to the key phrases used during the call. With this data in hand, we can:
    • See which phrases carry negative or positive sentiment.
    • Evaluate shifts in sentiment over time or during product and service announcements.

Table showing overall sentiment in three text categories

  • Using the entity recognition API, we can extract entities such as person, organization, location, date time, and more. We can use this data, for example, to:
    • Tie the call sentiment to specific events such as product launches or store openings in an area.
    • Use customer mentions of competitors for competitive intelligence and analysis.
  • Lastly, Power BI can help visualize the insights and communicate the patterns and trends to drive to action.

Power BI graph visualizing the insights and communicating the patterns and trends

Using the Azure Cognitive Services Text Analytics, we can gain deeper insights into customer interactions and go beyond simple customer surveys into the content of their conversations.
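As a minimal sketch of the calls described above, the following assumes the call turns have already been transcribed with the Speech APIs and uses the azure-ai-textanalytics Python package (a newer SDK than existed when this was written); the endpoint, key, and sample turns are placeholders.

from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
key = "<your-text-analytics-key>"                                  # placeholder
client = TextAnalyticsClient(endpoint, AzureKeyCredential(key))

# Turns of a transcribed call, produced earlier by speech-to-text.
turns = [
    "Agent: Thanks for calling, how can I help?",
    "Customer: My router stopped working and I have been waiting for days.",
    "Customer: Your colleague fixed it quickly, thank you so much!",
]

# Per-turn sentiment with confidence scores for positive/neutral/negative.
for turn, result in zip(turns, client.analyze_sentiment(turns)):
    print(result.sentiment, round(result.confidence_scores.positive, 2), turn)

# Key phrases and entities for the same turns.
for result in client.extract_key_phrases(turns):
    print(result.key_phrases)
for result in client.recognize_entities(turns):
    print([(entity.text, entity.category) for entity in result.entities])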

A sample code implementation of the above workflow can be found on GitHub.



Build more accurate forecasts with new capabilities in automated machine learning


We are excited to announce new capabilities which are part of time-series forecasting in the Azure Machine Learning service. We launched the preview of forecasting in December 2018, and we have been excited by the strong customer interest. We listened to our customers and appreciate all the feedback; your responses helped us reach this milestone. Thank you.

Featured image, general availability for Automated Machine Learning Time Series Forecasting

Building forecasts is an integral part of any business, whether it’s revenue, inventory, sales, or customer demand. Building machine learning models is time-consuming and complex, with many factors to consider, such as iterating through algorithms, tuning your hyperparameters, and feature engineering. These choices multiply with time series data, which adds considerations of trends, seasonality, holidays, and effectively splitting training data.

Forecasting within automated machine learning (ML) now includes new capabilities that improve the accuracy and performance of our recommended models:

  • New forecast function
  • Rolling-origin cross validation
  • Configurable lags
  • Rolling window aggregate features
  • Holiday detection and featurization

Expanded forecast function

We are introducing a new way to retrieve prediction values for the forecast task type. When dealing with time series data, several distinct scenarios arise at prediction time that require more careful consideration. For example, are you able to re-train the model for each forecast? Do you have the forecast drivers for the future? How can you forecast when you have a gap in historical data? The new forecast function can handle all these scenarios.

Let’s take a closer look at common configurations of train and prediction data scenarios when using the new forecasting function. For automated ML, the forecast origin is defined as the point in time when the prediction of forecast values should begin. The forecast horizon is how far out the prediction should go into the future.

In many cases training and prediction do not have any gaps in time. This is ideal because the model is trained on the freshest available data. We recommend you set up your forecast this way if your prediction interval allows time to retrain, for example in more stable data situations such as financial rate forecasts or supply chain applications using historical revenue or known order volumes.

Ideal use case when training and prediction data have no gaps in time.

When forecasting you may know future values ahead of time. These values act as contextual information that can greatly improve the accuracy of the forecast. For example, the price of a grocery item is known weeks in advance, which strongly influences the “sales” target variable. Another example is when you are running what-if analyses, experimenting with future values of drivers like foreign exchange rates. In these scenarios the forecast interface lets you specify forecast drivers describing time periods for which you want the forecasts (Xfuture). 

If train and prediction data have a gap in time, the trained model becomes stale. For example, in high-frequency applications like IoT it is impractical to retrain the model constantly, due to high velocity of change from sensors with dependencies on other devices or external factors e.g. weather. You can provide prediction context with recent values of the target (ypast) and the drivers (Xpast) to improve the forecast. The forecast function will gracefully handle the gap, imputing values from training and prediction context where necessary.

Using contextual data to assist forecast when training and prediction data have gaps in time.

In other scenarios, such as sales, revenue, or customer retention, you may not have contextual information available for future time periods. In these cases, the forecast function supports making zero-assumption forecasts out to a “destination” time. The forecast destination is the end point of the forecast horizon. The model maximum horizon is the number of periods the model was trained to forecast and may limit the forecast horizon length.

Use case when no gap in time exists between training and prediction data and no contextual data is available.

The forecast model enriches the input data (e.g. adds holiday features) and imputes missing values. The enriched and imputed data are returned with the forecast.
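For the zero-assumption case, a rough sketch might look like the following; the forecast_destination parameter name comes from the public forecasting notebooks and should be treated as an assumption, as should the example date.

import pandas as pd

# Forecast out to a destination time with no future context supplied.
# The model fills in the periods up to its maximum trained horizon.
y_forecast, X_trans = fitted_model.forecast(
    forecast_destination=pd.Timestamp("2019-07-15"))

# X_trans holds the enriched, imputed rows (holiday features, lags, and so on)
# that the model used, aligned with the values in y_forecast.
print(X_trans.head())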

Notebook examples for sales forecast, bike demand and energy forecast can be found on GitHub.

Rolling-origin cross validation

Cross-validation (CV) is a vital procedure for estimating and reducing a model’s out-of-sample error. For time series data we need to ensure that training only uses values from before the test data. Partitioning the data without regard to time does not match how data becomes available in production, and can lead to incorrect estimates of the forecaster’s generalization error.

To ensure correct evaluation, we added rolling-origin cross validation (ROCV) as the standard method to evaluate machine learning models on time series data. It divides the series into training and validation data using an origin time point. Sliding the origin in time generates the cross-validation folds.

As an example of what happens when we do not use ROCV, consider a hypothetical time series containing 40 observations, and suppose the task is to train a model that forecasts the series up to four time points into the future. A standard 10-fold cross validation (CV) strategy is shown in the image below. The y-axis delineates the CV folds, while the colors distinguish training points (blue) from validation points (orange). Notice how folds one through nine train the model on dates that come after dates included in the validation set, resulting in inaccurate training and validation results.

Cross validation showing training points spread across folds and distributed across time points causing data leakage in validation

This scenario should be avoided for time series. Instead, when we use an ROCV strategy as shown below, we preserve the integrity of the time series data and eliminate the risk of data leakage.

Rolling-Origin Cross Validation (ROCV) showing training points distributed on each fold at the end of the time period to eliminate data leakage during validation

ROCV is used automatically for forecasting. You simply pass the training and validation data together and set the number of cross validation folds. Automated machine learning (ML) will use the time column and grain columns you have defined in your experiment to split the data in a way that respects time horizons. Automated ML will also retrain the selected model on the combined train and validation set to make use of the most recent and thus most informative data, which under the rolling-origin splitting method ends up in the validation set.
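As a minimal configuration sketch (assuming the Python azureml-sdk of that release; train_data, the column names, and the metric are placeholders):

from azureml.train.automl import AutoMLConfig

# ROCV is applied automatically for the forecasting task once
# n_cross_validations is set; folds are split along the time column.
automl_config = AutoMLConfig(
    task="forecasting",
    primary_metric="normalized_root_mean_squared_error",
    training_data=train_data,      # training + validation history together
    label_column_name="sales",
    n_cross_validations=5,
    time_column_name="date",
    grain_column_names=["store"],
    max_horizon=14,
)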

Lags and rolling window aggregates

Often the best information a forecaster can have is the recent value of the target. Creating lags and cumulative statistics of the target increases the accuracy of your predictions.

In automated ML, you can now specify target lags as a model feature. The lag length identifies how many rows to lag based on your time interval. For example, if you want to lag by two units of time, you set the lag length parameter to two.

The table below illustrates how a lag length of two would be treated. Green columns are engineered features that lag sales by one day and by two days. The blue arrows indicate how each lag is generated from the training data. Not-a-number (NaN) values are created when no sample data exists for that lag period.

Table illustrating how a lag length of two would be treated

In addition to lags, there may be situations where you need to add rolling aggregations of data values as features. For example, when predicting energy demand you might add a rolling window feature of three days to account for thermal changes in heated spaces. The table below shows the feature engineering that occurs when window aggregation is applied. Columns for minimum, maximum, and sum are generated over a sliding window of three based on the defined settings. Each row gets a new calculated feature: for January 4, 2017, the maximum, minimum, and sum values are calculated using the temp values for January 1, 2017, January 2, 2017, and January 3, 2017. This window of three then shifts along to populate data for the remaining rows.

Table showing feature engineering that occurs when window aggregation is applied.

Generating and using these additional features as extra contextual data helps improve the accuracy of the trained model, and it only takes a few extra parameters in your experiment settings, as the sketch below shows.
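Here is a hedged sketch of those settings, using the target_lags and target_rolling_window_size parameter names from the automated ML forecasting notebooks; the values mirror the tables above and train_data is a placeholder.

from azureml.train.automl import AutoMLConfig

time_series_settings = {
    "time_column_name": "date",
    "max_horizon": 14,
    "target_lags": 2,                 # lag the target by two time units
    "target_rolling_window_size": 3,  # min/max/sum over a three-period window
}

automl_config = AutoMLConfig(
    task="forecasting",
    training_data=train_data,
    label_column_name="sales",
    n_cross_validations=5,
    **time_series_settings,
)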

Holiday features

For many time series scenarios, holidays have a strong influence on how the modeled system behaves. The time before, during, and after a holiday can modify the series’ patterns, especially in scenarios such as sales and product demand. For daily datasets, automated ML will create additional holiday features as input for model training. Each holiday generates a window over your existing dataset that the learner can assign an effect to. With this update, we support over 2,000 holidays in over 110 countries. To use this feature, simply pass the country code as part of the time series settings. The example below shows the input data in the left table, and the right table shows the updated dataset with holiday featurization applied. Additional features or columns are generated that add more context when models are trained, improving accuracy.

Training data on the left without holiday features applied; the table on the right shows the same data with holiday features applied.
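A minimal sketch of enabling holiday features, extending the time series settings from the previous sketch; the country_or_region parameter name is taken from the automated ML documentation of the time and should be treated as an assumption.

# Adding the country code to the same time series settings shown above
# generates holiday features automatically for daily datasets.
time_series_settings["country_or_region"] = "US"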

Get started with time-series forecasting in automated ML

With these new capabilities, automated ML increases support for more complex forecasting scenarios, provides more control over configuring training data with lags and window aggregation, and improves accuracy with holiday featurization and ROCV. Azure Machine Learning aims to enable data scientists of all skill levels to use powerful machine learning technology that simplifies their processes and reduces the time spent training models. Get started by visiting our documentation and let us know what you think - we are committed to making automated ML better for you!

Learn more about the Azure Machine Learning service and get started with a free trial.

Clever little C# and ASP.NET Core features that make me happy


I recently needed to refactor my podcast site, which is written in ASP.NET Core 2.2 and running in Azure. The Simplecast-backed API changed in a few major ways from their v1 to a new redesigned v2, so there was a big backend change, and that was a chance to tighten up the whole site.

As I was refactoring I made a few small notes of things that I liked about the site. A few were C# features that I'd forgotten about! C# is on version 8 but there were little happinesses in 6.0 and 7.0 that I hadn't incorporated into my own idiomatic view of the language.

This post is collecting a few things for myself, and you, if you like.

I've got a mapping between two collections of objects. There's a list of all Sponsors, ever. Then there's a mapping of shows where a show might have n sponsors.

Out Var

I have to "TryGetValue" because I can't be sure if there's a value for a show's ID. I wish there was a more compact way to do this (a language shortcut for TryGetValue, but that's another post).

Shows2Sponsor map = null;
shows2Sponsors.TryGetValue(showId, out map);
if (map != null)
{
    var retVal = sponsors.Where(o => map.Sponsors.Contains(o.Id)).ToList();
    return retVal;
}
return null;

I forgot that in C# 7.0 they added "out var" parameters, so I don't need to declare the map or its type. Tighten it up a little and I've got this. The LINQ query there returns a List of sponsor details from the main list, using the IDs returned from the TryGetValue.

if (shows2Sponsors.TryGetValue(showId, out var map))
    return sponsors.Where(o => map.Sponsors.Contains(o.Id)).ToList();
return null;

Type aliases

I found myself building JSON types in C# that were using the "Newtonsoft.Json.JsonPropertyAttribute" but the name is too long. So I can do this:

using J = Newtonsoft.Json.JsonPropertyAttribute;

Which means I can do this:

[J("description")] 

public string Description { get; set; }

[J("long_description")] public string LongDescription { get; set; }

LazyCache

I blogged about LazyCache before, and its challenges, but I'm loving it. Here I have a GetShows() method that returns a List of Shows. It checks a cache first, and if the cache is empty, it calls the Func that returns a List of Shows - and that Func is the thing that does the work of populating the cache. The cache entry lasts for about 8 hours. Works great.

public async Task<List<Show>> GetShows()
{
    Func<Task<List<Show>>> showObjectFactory = () => PopulateShowsCache();
    return await _cache.GetOrAddAsync("shows", showObjectFactory, DateTimeOffset.Now.AddHours(8));
}

private async Task<List<Show>> PopulateShowsCache()
{
    List<Show> shows = await _simpleCastClient.GetShows();
    _logger.LogInformation($"Loaded {shows.Count} shows");
    return shows.Where(c => c.Published == true && c.PublishedAt < DateTime.UtcNow).ToList();
}

What are some little things you're enjoying?


Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.




How to optimize your Azure environment


Without the right tools and approach, cloud optimization can be a time-consuming and difficult process. There is an ever-growing list of best practices to follow, and it’s constantly in flux as your cloud workloads evolve. Add the challenges and emergencies you face on a day-to-day basis, and it’s easy to understand why it’s hard to be proactive about ensuring your cloud resources are running optimally.

Azure offers many ways to help ensure that you’re running your workloads optimally and getting the most out of your investment.

Three kinds of optimization: organizational, architectural, and tactical

One way to think about these is the altitude of advice and optimization offered: organizational, architectural, or tactical.

At the tactical or resource level, you have Azure Advisor, a free Azure service that helps you optimize your Azure resources for high availability, security, performance, and cost. Advisor scans your resource usage and configuration and provides over 100 personalized recommendations. Each recommendation includes inline actions to make remediating your cloud resource optimizations fast and easy.


At the other end of the spectrum is Azure Architecture Center, a collection of free guides created by Azure experts to help you understand organizational and architectural best practices and optimize your workloads. This guidance is especially useful when you’re designing a new workload for the cloud or migrating an existing workload from on-premises to the cloud.

Azure Architecture Center main page.

The guides in the Azure Architecture Center range from the Microsoft Cloud Adoption Framework for Azure, which can help guide your organization’s approach to cloud adoption and strategy, to Azure Reference Architectures, which provides recommended architectures and practices for common scenarios like AI, IoT, microservices, serverless, SAP, web apps, and more.

Start small, gain momentum

There are many ways to get started optimizing your Azure environment. You can align as an organization on your cloud adoption strategy, you can review your workload architecture against the reference architectures we provide, or you can open up Advisor and see which of your resources have best practice recommendations. Those are just a few examples; ultimately it’s a choice only you and your organization can make.

If your organization is like most, it helps to start small and gain momentum. We’ve seen many customers have success kicking off their optimization journey at the tactical or resource level, then moving to the workload level, and ultimately working their way up to the organizational level, where you can consolidate what you’ve learned and implement policy.

Get started with Azure Advisor

When you visit Advisor, you’ll likely find many recommended actions you can take to optimize your environment. Our advice? Don’t get overwhelmed. Just get started. Scan the recommendations for opportunities that are the most meaningful to you and your organization. For some, that might be high availability considerations like VM backup, a common oversight in VM creation, especially when making the transition from dev/test to production. For others, it might be finding cost savings by looking at VMs that are being underutilized.

Azure Advisor recommendation details screen.

Once you’ve found a suitable recommendation, go ahead and remediate it as shown in this video. Optimization is an ongoing process and never really finished, but every step you take is a step in the right direction.

Visit Advisor in the Azure portal to get started reviewing and remediating your recommendations. For more in-depth guidance, visit the Azure Advisor documentation. Let us know if you have a suggestion for Advisor by submitting an idea in our feedback tool here.

Improving Azure DevOps cherry-picking


One of the more powerful git commands is the cherry-pick command. This command takes one or more existing commits and applies each commit’s changes as a new commit on a different branch. This can be an extremely powerful component of many git workflows such as the Azure DevOps team’s Release Flow. To highlight a common use-case for it, let’s talk about hot-fixing release branches.

In this scenario, we have a master branch that devs contribute to. When a release is ready, a release branch is created based off the latest commit in master and a deployment goes out to end users. After people begin using this newly released version, your team starts to get flooded with new bug reports—now it’s time for a hotfix!

As the dev tasked with fixing the bug, you create a hotfix branch (based off the head of the release branch) and commit the necessary changes (commits A and B in Figure 1) to that branch. After you’re satisfied that you have addressed the issue, you then open a pull request (PR) back into the release branch. The next step is to ensure that the next release doesn’t contain the bug—this is exactly when cherry-picking can help. So, you cherry-pick the hotfix commit(s) to a branch based off the head of master and open a PR into master.

Cherry-pick release workflow

Figure 1. Cherry-pick release workflow

Current Azure Repos cherry-pick support

This workflow is so common that Azure DevOps has a built-in capability to cherry-pick a PR’s commits to a new topic branch directly from a browser. However, this can be cumbersome if you need to apply commits to multiple branches at once while also opening new PRs.

For example, inside Microsoft it is very common for product teams to cherry-pick changes into multiple branches at the same time. The Office team, for instance, has multiple versions at various stages of deployment at any given time, meaning there are multiple release branches to hotfix.

With the current mechanism, you would have to cherry-pick to each new topic branch and then open a PR from the topic branch into the target branch — for every branch that needed the hotfix. Therefore, we built this extension to help Microsoft product teams, such as Office, and we wanted to share it with everyone!

Multi-cherry-pick extension

In order to provide an easy way to cherry-pick a PR’s commits to multiple branches at once, we added a new context menu item that sits right below the current cherry-pick menu item.

Multi cherry-pick extension context menu button

Figure 2. Multi cherry-pick extension context menu button

For each branch selected, a new topic branch will be created with the applied changes. If the Pull request option is selected, a pull request will be opened to the target branch.

Multi cherry-pick extension user interface

Figure 3. Multi cherry-pick extension user interface

Getting started

  1. Install the extension from the marketplace into your Azure DevOps organization.
  2. Navigate to your pull request.
  3. Select the context menu (…).
  4. Select Multi-cherry-pick.
  5. Add as many cherry-pick targets as you would like.
  6. After you click Complete, a summary page will appear with links to branches and PRs created from the tool.

This makes it much simpler to support workflows where one or more destination branches need to have commits applied.

Note: because this extension is open source, you can file feature requests, suggestions, and issues at GitHub.

Lastly, please leave us a review in the marketplace; we’d absolutely love to hear your feedback.

Happy cherry-picking!

