
This changes everything for the DIY Diabetes Community – TidePool partners with Medtronic and Dexcom


I don’t speak in hyperbole very often, and I want to make sure that you all understand what a big deal this is for the diabetes DIY community. Everything that we’ve worked for over the last 20 years, it all changes now. #WeAreNotWaiting

"You probably didn’t see this coming, [Tidepool] announced an agreement to partner with our friends at Medtronic Diabetes to support a future Bluetooth-enabled MiniMed pump with Tidepool Loop. Read more here: https://www.tidepool.org/blog/tidepool-loop-medtronic-collaboration"

Translation? This means that diabetics will be able to choose their own supported equipment and build their own supported FDA Approved Closed Loop Artificial Pancreases.

Open Source Artificial Pancreases will become the new standard of care for Diabetes in 2019

Every diabetic engineer ever, the day after they were diagnosed, tries to solve their (or their loved one's) diabetes with open software and open hardware. Every one. I did it in the early 90s. Someone diagnosed today will do this tomorrow. Every time.

I tried to send my blood sugar to the cloud from a PalmPilot. Every person diagnosed with diabetes ever, does this. Has done this. We try to make our own systems. Then @NightscoutProj happened and #WeAreNotWaiting happened and we shared code and now we sit on the shoulders of people who GAVE THEIR IDEAS TO US FOR FREE.


Here's the first insulin pump. Imagine a disease this miserable that you'd choose this. Type 1 Diabetes IS NOT FUN. Now we have Bluetooth and Wifi and the Cloud but I still have an insulin pump I bought off of Craigslist.


Imagine a watch that gives you an electrical shock so you can check your blood sugar. We are all just giant bags of meat and water under pressure and poking the meatbag 10 times a day with needles and #diabetes testing strips SUUUUCKS.


The work of early #diabetes pioneers is now being leveraged by @Tidepool_org to encourage large diabetes hardware and sensor manufacturers to - wait for it - INTEROPERATE on standards we can talk to.


Just hours after I got off stage speaking on this very topic at @RefactrTech, it turns out that @howardlook and the wonderful friends at @Tidepool_org like @kdisimone and @ps2 and pioneer @bewestisdoing and others announced there are now partnerships with MULTIPLE insulin pump manufacturers AND multiple sensors!

We the DIY #diabetes community declared #WeAreNotWaiting and, dammit, we'd do this ourselves. And now Tidepool is expressing the intent to put an Artificial Pancreas in the damn App Store - along with Angry Birds - WITH SUPPORT FOR WARRANTIED NEW BLE PUMPS. I could cry.

You see this #diabetes insulin pump? It’s mine. See those cracks? THOSE ARE CRACKS IN MY INSULIN PUMP. This pump does not have a warranty, but it’s the only one that I have if I want an open source artificial pancreas. Now I’m going to have real choices, multiple manufacturers.

It absolutely cannot be overstated how many people keep this community alive, from early Python libraries that talked to insulin pumps, to man-in-the-middle attacks to gain access to our own data, to custom hardware boards created to bridge the new and the old.

To the known and the unknown, the sung and the unsung, we in the Diabetes Community appreciate you all. We are standing on the shoulders of giants - I want to continue to encourage open software and open hardware whenever possible. Get involved.

Also, if you're diabetic, consider buying a Nightscout Xbox Avatar accessory so you can see yourself represented while you game!

Oh, and one other thing, journalists who cover the Diabetes DIY community, please let us read your articles before you write them. They all have mistakes and over-generalizations and inaccuracies and it's awkward to read them. That is all.


Sponsor: Manage GitHub Pull Requests right from the IDE with the latest JetBrains Rider. An integrated performance profiler on Windows comes to the rescue as well.




Microsoft hosts HL7 FHIR DevDays


This blog post was co-authored by Greg Moore, Corporate Vice President, Microsoft Healthcare and Peter Lee, Corporate Vice President, Microsoft Healthcare.

One of the largest gatherings of healthcare IT developers will come together on the Microsoft campus next week for HL7 FHIR DevDays, with the goal of advancing the open standard for interoperable health data, called HL7® FHIR® (Fast Healthcare Interoperability Resources, pronounced “fire”). Microsoft is thrilled to host this important conference on June 10-12, 2019 on our Redmond campus, and engage with the developer community on everything from identifying immediate use cases to finding ways for all of us to hack together in ways that help advance the FHIR specification.

We believe that FHIR will be an incredibly important piece of the healthcare future. Its modern design enables a new generation of AI-powered applications and services, and it provides an extensible, standardized format that makes it possible for all health IT systems to not only share data so that it can get to the right people where and when they need it, but also turn that data into knowledge. While real work has been underway for many years on HL7 FHIR, today it has become one of the most critical technologies in health data management, leading to major shifts in both the technology and policy of healthcare. 

Given the accelerating shift of healthcare to the cloud, FHIR in the cloud presents a potentially historic opportunity to advance health data interoperability. For this reason, last summer in Washington, DC, we stood with leaders from AWS, Google, IBM, Oracle, and Salesforce to make a joint pledge to adopt technologies that promote the interoperability of health data. But we all know that FHIR is not magic. To make the liberation of health data a reality, developers and other stakeholders will need to work together, and so this is why community events like HL7 FHIR DevDays are so important. They allow us to try out new ideas in code and discuss a variety of areas, from the basics of FHIR, to its use with medical devices, imaging, research, security, privacy, and patient empowerment.

The summer of 2019 may indeed be the coming of age for FHIR, with the new version of the standard called “FHIR release 4” (R4) reaching broader adoption, new product updates from Microsoft, and new interop policies from the US government that will encourage the industry to adopt FHIR more broadly.

New FHIR standard progressing quickly

Healthcare developers can start building with greater confidence that FHIR R4 will help connect people, data, and systems. R4 is the first version to include “normative” content, which means those portions of the specification are now fixed and all future versions must remain backward compatible with them.
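For developers new to FHIR, the REST surface is simple to try out. Below is a minimal sketch, assuming a generic FHIR R4 endpoint (the base URL is a placeholder) and Python with the requests library, that creates a Patient resource; the resource shape follows the published FHIR specification.

    import requests

    # Placeholder base URL for any FHIR R4-compliant server (for example, a test
    # deployment of the open source FHIR Server for Azure). Replace with your own.
    FHIR_BASE = "https://example-fhir-server.azurewebsites.net"

    # A minimal Patient resource per the FHIR specification.
    patient = {
        "resourceType": "Patient",
        "name": [{"family": "Garcia", "given": ["Ana"]}],
        "birthDate": "1987-02-11",
    }

    # Create the resource with a standard FHIR REST interaction: POST [base]/Patient.
    response = requests.post(
        f"{FHIR_BASE}/Patient",
        json=patient,
        headers={"Content-Type": "application/fhir+json"},
    )
    response.raise_for_status()
    print("Created Patient with id:", response.json()["id"])

Reading the resource back is the mirror-image interaction, GET [base]/Patient/{id}, which any conformant server supports.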

Microsoft adding more FHIR functionality to Azure

Microsoft is doing its part to realize the benefits of health data interoperability with FHIR, and today we’re announcing that our open source FHIR Server for Azure now supports FHIR R4 and is available today.

We have added a new data persistence provider implementation to the open source FHIR Server for Azure. The new SQL persistence provider enables developers to configure their FHIR server instance to use either an Azure Cosmos DB backed persistence layer, or a persistence layer using a SQL database, such as Azure SQL Database. This will make it easier for customers to manage their healthcare applications by adding more capabilities for their preferred SQL provider. It will extend the capability of a FHIR server in Azure to support key business workloads with new features such as chained queries and transactions.
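To illustrate what a chained query looks like in practice, here is a hedged sketch, again assuming a placeholder FHIR endpoint and Python's requests library. Chained search parameters such as subject:Patient.name are part of the FHIR search specification; the new persistence options are what let the server satisfy them efficiently.

    import requests

    FHIR_BASE = "https://example-fhir-server.azurewebsites.net"  # placeholder endpoint

    # Chained search: find Observations whose subject is a Patient named "Garcia".
    # Equivalent to GET [base]/Observation?subject:Patient.name=Garcia
    response = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"subject:Patient.name": "Garcia"},
        headers={"Accept": "application/fhir+json"},
    )
    response.raise_for_status()
    bundle = response.json()  # a FHIR searchset Bundle
    print("Total matches:", bundle.get("total"))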

Growing ecosystem of customers and partners

Our Azure API for FHIR already has a broad partner ecosystem in place and customers using the preview service to centralize disparate data.

Northwell Health, the largest employer in New York state with 23 hospitals and 700 practices, is using the Azure API for FHIR to build interoperability into its data flow solution to reduce excess days for patients. This ensures that patients stay only for the period required for clinical care and that no non-clinical reasons delay their discharge.

Our open source implementation of FHIR Server for Azure is already creating a tighter feedback loop between our products and the developers and partners who have quickly innovated on top of this open source project.

Darena Solutions used the open source FHIR Server for Azure to develop its Blue Button application, BlueButtonPRO. It allows patients to import their data from the Centers for Medicare & Medicaid Services (CMS) through Blue Button. More importantly, it gives patients a simple and secure way to download, view, manage, and share healthcare data from any FHIR portals they have access to.

US Health IT Policy proposal to adopt FHIR

The DevDays conference also comes on the heels of the US government’s proposed rules to improve the interoperability of health data, implementing the 21st Century Cures Act, which include the use of FHIR.

Microsoft supports the focus in these proposed rules on reducing barriers to interoperability because we are confident that the result will be good for patients. Interoperability and the seamless flow of health data will enable a more informed and empowered consumer. We expect the health industry will respond with greater efficiency, better care, and cost savings.

We're at a pivotal moment for health interoperability, where all the bottom-up development in the FHIR community is meeting top-down policy decisions at the federal level.

Health data interoperability at Microsoft

Integrating health data into our platforms is a huge commitment for Microsoft, and Azure with FHIR is just the start. Now that FHIR is baked into the core of Azure, the Microsoft cloud will natively speak FHIR as the language for health data as we plan for all our services to inherit that ability.

Healthcare today and into the future will demand a broad perspective and creative, collaborative problem-solving. Looking ahead, Microsoft intends to continue an open, collaborative dialogue with the industry and community, from FHIR DevDays to the hallways of our customers and partners.

FHIR is a part of our healthcare future, and FHIR DevDays is a great place to start designing for that future.

Announcing Mobility service for Azure Maps, SDKs updates, and more


Mobility has become the center of an array of new technologies running the gamut from cloud-based algorithms and ride-sharing services, to edge cognition, assisted driving, and traffic pattern analysis – all in an effort to move people and things from one location to another more efficiently. These are challenging initiatives that require scale, real-time intelligence, and deep insights. In an effort to begin chipping away at helping to get people moving, Azure Maps is excited to introduce Mobility service APIs for Azure Maps.

The Mobility service will begin by powering public transit routing, enabling organizations to add public transportation information and routing capabilities into their mobility, IoT, logistics, asset tracking, smart cities, and similar solutions.

Request public transit routes and visualize routes on map.

The Mobility service APIs for Azure Maps are brought to life in partnership with Moovit, Inc. – a partnership that was announced last year. Natively through Azure Maps, organizations can use transit routing in their applications to serve public transportation data to their customers, as well as to generate deeper insights, with applications spanning smart cities, transportation, automotive, field services, retail, and more.

A collection of operations allows applications to request public transit, bikeshare, scooter share, and car share information to plan their routes leveraging alternative modes of transportation and real-time data. Applications can use the information returned for smart city and IoT scenarios like:

  • Minimizing urban congestion by combining public and private transportation services
  • Leveraging IoT sensor data to enable dynamic routing
  • Simulating the movements of occupants in a city environment

The Mobility service also provides additional insights on mobility trends, such as public transit ridership, costs and benefits of different transit modes, justifications for additional public transit, or additional taxation opportunities for roads and parking.

The Mobility service provides the ability to natively request nearby transit objects, such as public transit stops, shared bikes, scooters, or cars around a given location, and allows users to search for specific object types within a given radius, returning a set of transit objects with object details. The returned information can be used for further processing, such as requesting real-time arrivals for a stop, or transit stop details such as the main transit type of the lines serving a given public stop, active service alerts, or the main transport agency. Users can request transit line details covering basic information, such as line number and group information, or more detailed information such as line geometry, the list of stops, scheduled and real-time transit arrivals, and service alerts.

Show nearby transit objects on the map around a given location and within a specific radius.
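As a rough illustration of the shape of such a nearby-transit request, the sketch below uses Python and the requests library against the Azure Maps REST surface; the exact path, parameter names, and response fields here are assumptions for illustration only, so consult the Mobility service API reference for the authoritative contract.

    import requests

    SUBSCRIPTION_KEY = "<azure-maps-subscription-key>"  # placeholder credential

    # Hypothetical nearby-transit lookup: transit objects within 300 meters of a point.
    response = requests.get(
        "https://atlas.microsoft.com/mobility/transit/nearby/json",  # illustrative path
        params={
            "api-version": "1.0",
            "subscription-key": SUBSCRIPTION_KEY,
            "query": "47.6062,-122.3321",   # latitude,longitude of the search origin
            "radius": 300,                   # search radius in meters
        },
    )
    response.raise_for_status()
    for transit_object in response.json().get("results", []):
        # Each result is assumed to carry a type (stop, bike dock, etc.) and an id
        # that can be fed into follow-up calls such as real-time arrivals.
        print(transit_object.get("type"), transit_object.get("id"))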

Customers can also find out how many shared bikes are available at the closest dock by requesting docking station information. When searching for available car share vehicles, details such as future availability and current fuel level are included in the response. This information can be used for further processing, such as calling the Azure Maps Route Range API to calculate a reachable range (isochrone) from the origin point based on a fuel or time budget, and then requesting points of interest within that isochrone by using the Search Inside Geometry API.
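The following sketch shows how the Route Range API could be called to get an isochrone around a vehicle's location given a time budget; it assumes key-based authentication and Python's requests library, and the parameter and response field names should be checked against the Route Range API reference.

    import requests

    SUBSCRIPTION_KEY = "<azure-maps-subscription-key>"  # placeholder credential

    # Reachable range (isochrone) from an origin with a 30-minute time budget.
    response = requests.get(
        "https://atlas.microsoft.com/route/range/json",
        params={
            "api-version": "1.0",
            "subscription-key": SUBSCRIPTION_KEY,
            "query": "47.6062,-122.3321",  # origin latitude,longitude
            "timeBudgetInSec": 1800,        # 30 minutes of travel time
        },
    )
    response.raise_for_status()
    boundary = response.json()["reachableRange"]["boundary"]
    # The boundary polygon can then be passed to the Search Inside Geometry API
    # to find points of interest reachable within the budget.
    print("Isochrone vertices:", len(boundary))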

The Mobility service supports trip planning, returning the best possible route options and providing a variety of travel modes, including walking, biking, and public transit available within the metro area (city). The service allows users to request one or multiple public transit types, such as bus, tram, and subway. It also allows users to focus on certain types of bikes and to express a preference for a specific transit agency operating in the metro area. Users also have the option to choose optimal routes based on multiple parameters, such as minimal walking or minimal transfers, or to specify desired departure or arrival times. The Mobility service supports real-time trip planning and provides real-time arrival information for stops and lines. Azure Maps can send notifications to users about service alerts for stops, lines, and metro areas (cities), and provide updated times with alternate routes in case of interruptions.

The Mobility service can also return multiple alternate routes that may not be considered optimal given current conditions but could be preferred by the end user. The service returns data pertaining to the various legs that comprise the route itinerary, including the locations, public transit lines, and start and end times. Users can also request transit itinerary details with additional information, such as the geometry of the route and detailed itinerary schedules.
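A sketch of a transit routing request might look like the following; as with the earlier example, the path and parameter names are illustrative assumptions rather than the exact Mobility service contract.

    import requests

    SUBSCRIPTION_KEY = "<azure-maps-subscription-key>"  # placeholder credential

    # Hypothetical transit route request: origin and destination coordinates,
    # restricted to bus and subway, optimized for the fewest transfers.
    response = requests.get(
        "https://atlas.microsoft.com/mobility/transit/route/json",  # illustrative path
        params={
            "api-version": "1.0",
            "subscription-key": SUBSCRIPTION_KEY,
            "origin": "47.6062,-122.3321",
            "destination": "47.6205,-122.3493",
            "modeType": "publicTransit",
            "transitType": "bus,subway",       # illustrative filter
            "routeType": "leastTransfers",     # illustrative optimization choice
        },
    )
    response.raise_for_status()
    for itinerary in response.json().get("results", []):
        print(itinerary.get("departureTime"), "->", itinerary.get("arrivalTime"))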

SDK updates

Azure Maps Web SDK

In this release we have added a preview of a new drawing tools module, which makes it easy to draw points, lines, and polygons on the map using a mouse or touch. Several new spatial math features have been added around speed and acceleration-based calculations, as well as affine transformations, polygon area calculations, closest point to a line, and convex hulls. The team has also spent a lot of time adding performance enhancements, improving stability, and improving accessibility.

Azure Maps Android SDK update

Added support for Azure Active Directory authentication, drawing lines and polygons, and raising and handling events.

Spatial Operations for Azure Maps are now generally available

Azure Maps Spatial Operations takes location information and analyzes it on the fly to help inform customers of ongoing events happening in time and space, enabling near real-time analysis and predictive modeling of events. Spatial Operations provides applications with enhanced location intelligence through a library of common geospatial mathematical calculations, including services such as closest point, great circle distance, and buffers.
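To give a sense of what the great circle distance calculation does, here is a small self-contained Python sketch of the haversine formula; the hosted Spatial Operations service exposes this kind of calculation as a REST call, so this is only a local illustration of the math.

    import math

    def great_circle_distance_m(lat1, lon1, lat2, lon2, radius_m=6_371_000):
        """Haversine distance between two points on a sphere, in meters."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
        return 2 * radius_m * math.asin(math.sqrt(a))

    # Seattle to Redmond, roughly 17 km as the crow flies.
    print(round(great_circle_distance_m(47.6062, -122.3321, 47.6740, -122.1215) / 1000, 1), "km")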

Cartography and styling updates

Light grey map style

We’ve added a new map style, light grey, to our map style offering. A complement to the dark grey style, the new light grey canvas lets customers visualize their custom data atop a lighter-contrast map. Like the other styles, this can be used seamlessly with the Azure Maps Web SDK and Android SDK, for example, to create interactive maps with data-driven styling, or heatmaps from a data set of point features.

Light grey map style in Azure Maps at zoom level 4.

Light grey map style in Azure Maps at zoom level 15.

Pedestrian and walking paths

Additional detail has been added for pedestrian and walking paths, including moving them to zoom level 14, which has greatly improved the appearance of urban areas and city parks.

Pedestrian and walking paths after cartography update.

Road network layering

To give a more realistic view, we are now showing the layering of tunnels, bridges, underpasses, and overpasses for both vehicle and pedestrian crossings.

Road network layering changes. Before (left image) and after (right image).

Data rendering

To improve styling and usability, certain polygons and labels were pushed up in the data so that they appear at higher levels. Hundreds of cities have been regrouped by size in the data in order to adjust which zoom level cities are shown at based on significance. As a result, medium and large cities have been moved to zoom level 4. Due to symbol collision, most medium cities do not show until level 5. In addition, all national, regional, and state parks are now rendered at zoom level 4 instead of zoom level 7.

Data rendering improvements on Azure Maps.

We want to hear from you!

We are always working to grow and improve the Azure Maps platform and want to hear from you. We’re here to help and want to make sure you get the most out of the Azure Maps platform.

  • Have a feature request? Add it or vote up the request on our feedback site.
  • Having an issue getting your code to work? Have a topic you would like us to cover on the Azure blog? Ask us on the Azure Maps forums.
  • Looking for code samples or wrote a great one you want to share? Join us on GitHub.
  • To learn more, read the Azure Maps documentation.

Supporting the community with WF and WCF OSS projects


At the Build conference in May 2019, we mentioned that, after we add WinForms, WPF and Entity Framework 6 to .NET Core 3.0, we do not plan to add any more of the technologies from .NET Framework to .NET Core.

This means we will not be adding ASP.NET Web Forms, WCF, Windows Workflow, .NET Remoting, or the various other smaller APIs to .NET Core. For new applications, there are better technologies that serve a similar purpose and provide more capabilities or better experiences. We think of .NET Core as the framework on which our customers will build brand-new applications or port the applications they are still investing significant engineering work in.

ASP.NET Blazor – provides a similar component and event-based programming model as ASP.NET Web Forms but generating a SPA (Single Page Application) instead of a traditional web site.

ASP.NET Web API or gRPC – provide APIs and contract-based RPCs that can be used across all devices and platforms.

.NET Core WCF Client – provides the ability for .NET Core projects to call into the existing WCF Servers that run on .NET Framework.

What do you do with your older applications that you are not spending much engineering time on? We recommend leaving these on .NET Framework. If you’re not spending much time on those projects and they meet your business needs, then you should just leave them where they are. You can even modernize those existing applications to Windows containers if you want to run them in containers.

.NET Framework will continue to be supported and will receive minor updates. Even here at Microsoft, many large products will remain on .NET Framework. There are absolutely no changes to support and that will not change in the future. .NET Framework 4.8 is the latest version of .NET Framework and will continue to be distributed with future releases of Windows. If it is installed on a supported version of Windows, .NET Framework 4.8 will continue to be supported too.

If you really want to move one of your older applications to .NET Core and don’t want to migrate it to newer technologies like Web API, gRPC, or cloud-based workflow, we are supporting two community efforts that provide ports of Windows Workflow and WCF to .NET Core.

Core WCF

Core WCF is a new community-owned project under the .NET Foundation. Microsoft has made an initial contribution of code from a WCF team member to help get the project started. Core WCF is not intended to be a 100% compatible port of WCF to .NET Core, but it aims to allow porting of many WCF contract and service implementations with only a change of namespace.

Initially, it will support HTTP and TCP SOAP services on top of Kestrel, which are the most commonly used transports on .NET Framework.

This project is not yet ready for production but needs people to get involved and help get it there faster. If you are interested in this, or want more details about the project, then we encourage you to go and explore the Core WCF project on GitHub.

This project has joined the .NET Foundation and you can read about it on the .NET Foundation blog.

Core Workflow

Core WF is a port of Workflow for .NET Core sponsored by UiPath. The project was started by a former Workflow team member, and the .NET team has been working to make sure that all the source code they need to do the work of porting Workflow is available to them. This project will need more community help to become a replacement for Workflow on .NET Framework, and we encourage anyone who wishes to see Workflow on Core to get involved and see if they can help out.

Conclusion

We’re happy to see these projects become part of the .NET OSS community and hope that you’ll join us in supporting them and other .NET OSS. If you want more information about the .NET Foundation or what you can do to get involved, then be sure to check out the .NET Foundation website.


St. Luke’s transforms clinical collaboration with Microsoft 365 cloud-connected workplace

How modern collaboration tools enhance patient outcomes across healthcare

Azure.Source – Volume 86


News and updates

FHIR

Microsoft hosts HL7 FHIR DevDays

One of the largest gatherings of healthcare IT developers will come together on the Microsoft campus June 10-12 for HL7 FHIR DevDays, with the goal of advancing the open standard for interoperable health data, called HL7® FHIR® (Fast Healthcare Interoperability Resources, pronounced “fire”). Microsoft is thrilled to host this important conference, and engage with the developer community on everything from identifying immediate use cases to finding ways for all of us to hack together in ways that help advance the FHIR specification.

Announcing self-serve experience for Azure Event Hubs Clusters

For businesses today, data is indispensable. Innovative ideas in manufacturing, health care, transportation, and financial industries are often the result of capturing and correlating data from multiple sources. Now more than ever, the ability to reliably ingest and respond to large volumes of data in real time is the key to gaining competitive advantage for consumer and commercial businesses alike. To meet these big data challenges, Azure Event Hubs offers a fully managed and massively scalable distributed streaming platform designed for a plethora of use cases from telemetry processing to fraud detection.
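As a quick orientation, here is a minimal producer sketch using the Azure Event Hubs SDK for Python (azure-eventhub); the connection string and hub name are placeholders, and the payload is an arbitrary example rather than a schema from the announcement.

    from azure.eventhub import EventHubProducerClient, EventData

    producer = EventHubProducerClient.from_connection_string(
        conn_str="<event-hubs-namespace-connection-string>",  # placeholder
        eventhub_name="telemetry",                            # placeholder hub name
    )

    # Batch a single JSON event and send it to the hub.
    with producer:
        batch = producer.create_batch()
        batch.add(EventData('{"deviceId": "sensor-01", "temperature": 21.5}'))
        producer.send_batch(batch)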

A look at Azure's automated machine learning capabilities

The automated machine learning capability in Azure Machine Learning service allows data scientists, analysts, and developers to build machine learning models with high scalability, efficiency, and productivity all while sustaining model quality. With the announcement of automated machine learning in Azure Machine Learning service as generally available last December, we have started the journey to simplify artificial intelligence (AI). We are furthering our investment for accelerating productivity with a new release that includes exciting capabilities and features in the areas of model quality, improved model transparency, the latest integrations, ONNX support, a code-free user interface, time series forecasting, and product integrations.

Technical content


Securing the hybrid cloud with Azure Security Center and Azure Sentinel

Infrastructure security is top of mind for organizations managing workloads on-premises, in the cloud, or in hybrid environments. Keeping on top of an ever-changing security landscape presents a major challenge. Fortunately, the power and scale of the public cloud has unlocked powerful new capabilities for helping security operations stay ahead of the changing threat landscape. Microsoft has developed a number of popular cloud-based security technologies that continue to evolve as we gather input from customers. This post breaks down a few key Azure security capabilities and explains how they work together to provide layers of protection.

Customize your automatic update settings for Azure Virtual Machine disaster recovery

In today’s cloud-driven world, employees are only allowed access to data that is absolutely necessary for them to effectively perform their job. The ability to control access in this way while still letting infrastructure administrators perform their job duties is becoming more relevant and is frequently requested by customers. When we released the automatic update of agents used in disaster recovery (DR) of Azure Virtual Machines (VMs), the most frequent feedback we received was related to access control. The request we heard from you was to allow customers to provide an existing automation account, approved and created by a person who is entrusted with the right access in the subscription. You asked, and we listened!

Azure Stack IaaS – part nine

Before we built Azure Stack, our program manager team called a lot of customers who were struggling to create a private cloud out of their virtualization infrastructure. We were surprised to learn that the few that managed to overcome the technical and political challenges of getting one set up had trouble getting their business units and developers to use it. It turns out they created what we now call a snowflake cloud, a cloud unique to just their organization. This is one of the main problems we were looking to solve with Azure Stack. A local cloud that has not only automated deployment and operations, but also is consistent with Azure so that developers and business units can tap into the ecosystem. In this blog we cover the different ways you can tap into the Azure ecosystem to get the most value out of IaaS.


What is the difference between Azure Application Gateway, Load Balancer, Front Door and Firewall?

Last week at a conference in Toronto, an attendee came to the Microsoft booth and asked something that has been asked many times in the past. So, this blog post covers all of it here for everyone’s benefit. What are the differences between Azure Firewall, Azure Application Gateway, Azure Load Balancer, Network Security Groups, Azure Traffic Manager, and Azure Front Door? This blog offers a high-level consolidation of what they each do.

Azure shows

Five tools for building APIs with GraphQL | Five Things

Burke and Chris are back and this week they're bringing you five tools for building APIs with GraphQL. True story: they shot this at the end of about a twelve-hour day and you can see the pain in Burke's eyes. It's not GraphQL he doesn't like, it's filming for six straight hours. Also, Chris picks whistles over bells (because of course he does) and Burke fights to stay awake for four minutes.

Microservices and more in .NET Core 3.0 | On .NET

Enabling developers to build resilient microservices is an important goal for .NET Core 3.0. In this episode, Shayne Boyer is joined by Glenn Condron and Ryan Nowak from the ASP.NET team who discuss some of the exciting work that's happening in the microservice space for .NET Core 3.0.

Interknowlogy mixes Azure IoT and mixed reality | The Internet of Things Show

When mixed reality meets the Internet of Things through Azure Digital Twins, a new way of accessing data materializes. See how Interknowlogy mixes Azure IoT and mixed reality to deliver not only stunning experiences but also added efficiency and productivity for the workforce.

Bring DevOps to your open-source projects: Top three tips for maintainers | The Open Source Show

Baruch Sadogursky, Head of Developer Relations at JFrog, and Aaron Schlesinger, Cloud Advocate at Microsoft and Project Athens maintainer, talk about the art of DevOps for open source, balancing contributor needs with the core DevOps principles of people, process, and tools. You'll learn how to future-proof your projects, avoid the dreaded bus factor, and get Aaron and Baruch's advice for evaluating and selecting tools, soliciting contributor input and voting, documenting processes, and so much more.

Episode 282 - Azure Front Door Service | The Azure Podcast

Cynthia talks with Sharad Agrawal on what Azure Front Door Service is, how to choose between Azure Front Door Service, CDN, Azure Traffic Manager and App Gateway, and how to get started.

Atley Hunter on the Business of App Development | Azure DevOps Podcast

In this episode, Jeffrey and Atley are discussing the business of app development. Atley describes some of the first apps he’s ever developed, some of the most successful and popular apps he’s ever created, how he’s gone about creating these apps, and gives his tips for other developers in the space.

Industries and partners

Empowering clinicians with mobile health data: Right information, right place, right time

Improving patient outcomes and reducing healthcare costs depend on the ability of healthcare providers, such as doctors, nurses, and specialized clinicians, to access a wide range of data at the point of patient care, in the form of health records, lab results, and protocols. Tactuum, a Microsoft partner, provides the Quris solution, which empowers clinicians with access to the right information, in the right place, at the right time, enabling them to do their jobs efficiently and with less room for error.

Building a better asset and risk management platform with elastic Azure services

Elasticity means services can expand and contract on demand. This means Azure customers who are on a pay-as-you-go plan will reap the most benefit out of Azure services. Their service is always available, but the cost is kept to a minimum. Together with elasticity, Azure lets modern enterprises migrate and evolve more easily. For financial service providers, the modular approach lets customers benefit from best-of-breed analytics in three key areas. Read the post to learn what they are.

Symantec’s zero-downtime migration to Azure Cosmos DB

How do you migrate live, mission-critical data for a flagship product that must manage billions of requests with low latency and no downtime? The consumer business unit at Symantec faced this exact challenge when deciding to shift from their costly and complex self-managed database infrastructure, to a geographically dispersed and low latency managed database solution on Azure. The Symantec team shared their business requirements and decision to adopt Azure Cosmos DB in a recent case study.

Compute and stream IoT insights with data-driven applications


There is a lot more data in the world than can possibly be captured with even the most robust, cutting-edge technology. Edge computing and the Internet of Things (IoT) are just two examples of technologies increasing the volume of useful data. There is so much data being created that the current telecom infrastructure will struggle to transport it and even the cloud may become strained to store it. Despite the advent of 5G in telecom, and the rapid growth of cloud storage, data growth will continue to outpace the capacities of both infrastructures. One solution is to build stateful, data-driven applications with technology from SWIM.AI.

The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how one Microsoft partner uses Azure to solve a unique problem.

Shared awareness and communications

The increase in volume has other consequences, especially when IoT devices must be aware of each other and communicate shared information. Peer-to-peer (P2P) communications between IoT assets can overwhelm a network and impair performance. Smart grids are an example of how sensors or electric meters are networked across a distribution grid to improve the overall reliability and cost of delivering electricity. Using meters to determine the locality of issues can help improve service to a residence, neighborhood, municipality, sector, or region. The notion of shared awareness extends to vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. As networked AI spreads to more cars and devices, so do the benefits of knowing the performance or status of other assets. Other use cases include:

  • Traffic lights that react to the flow of vehicles across a neighborhood.
  • Process manufacturing equipment that can determine the impact from previous process steps.
  • Upstream oil/gas equipment performance that reacts to downstream oil/gas sensor validation.

Problem: Excess data means data loss

When dealing with large volumes of data, enterprises often struggle to determine which data to retain, how much to retain, and for how long they must retain it. By default, they may not retain any of it. Or, they may sub-sample data and retain an incomplete data set. That lost data may potentially contain high value insights. For example, consider traffic information that could be used for efficient vehicle routing, commuter safety, insurance analysis, and government infrastructure reviews. The city of Las Vegas maintains over 1,100 traffic light intersections that can generate more than 45TB of data every day. As stated before, IoT data will challenge our ability to transport and store data at these volumes.

Data may also become excessive when it’s aggregated. For example, telecom and network equipment typically create snapshots of data and send them every 15 minutes. By normalizing this data into a summary over time, you lose granularity. This means the nature or pattern of the data over time, along with any unique, intermittent events, would be missed. The same applies to any equipment capturing fixed-time, window summary data. The loss of data is detrimental to networks where devices share data, either for awareness or communication. The problem is also compounded when only snapshots are captured and aggregated for an entire network of thousands or millions of devices.

Real-time is the goal

Near real-time is the current standard for stateless application architectures, but “near” real-time is not fast enough anymore. Real-time processing, or processing within milliseconds, is the new standard for V2V or V2I communications and requires a much more performant architecture. Swim does this by leveraging stateful APIs. With stateful connections, it’s possible to have a rapid response between peers in a network. Speed has enormous effects on efficiency and reliability, and it’s essential for systems where safety is paramount, such as crash prevention. Autonomous systems will rely on real-time performance for safety purposes.

An intelligent edge data strategy

SWIM.AI delivers a solution for building scalable streaming applications. According to their site Meet Swim:

“Instead of configuring a separate message broker, app server and database, Swim provides for its own persistence, messaging, scheduling, clustering, replication, introspection, and security. Because everything is integrated, Swim seamlessly scales across edge, cloud, and client, for a fraction of the infrastructure and development cost of traditional cloud application architectures.”

The figure below shows an abstract view of how Swim can simplify IoT architectures:

Diagram display of how Swim can simplify architectures

Harvest data in mid-stream

SWIM.AI uses the lightweight Swim platform, with only a 2 MB footprint, to compute and stream IoT insights, building what they call “data-driven applications.” These applications sit in the data stream and generate a unique, intelligent web agent for each data source they see. These intelligent web agents then process the raw data as it streams, publishing only state changes from the data stream. This streamed data can be used by other web agents or stored in a data lake, such as Azure Data Lake Storage.
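The following Python sketch is a conceptual illustration of that idea, not the Swim API itself: one stateful "agent" per data source holds the last known state in memory and publishes only when that state changes.

    # Conceptual sketch of a per-source stateful agent (not the Swim API).
    class IntersectionAgent:
        def __init__(self, source_id, publish):
            self.source_id = source_id
            self.publish = publish   # callback to downstream consumers or storage
            self.state = None        # last known state for this data source

        def on_raw_sample(self, sample):
            # Reduce the raw sample to the state we care about (e.g., light phase).
            new_state = sample["phase"]
            if new_state != self.state:          # suppress unchanged readings
                self.state = new_state
                self.publish({"source": self.source_id, "phase": new_state})

    agents = {}

    def route(sample, publish=print):
        # Lazily create one agent per data source seen in the stream.
        agent = agents.setdefault(sample["id"], IntersectionAgent(sample["id"], publish))
        agent.on_raw_sample(sample)

    # Only the first and third samples produce output; the repeated "green" is dropped.
    for s in [{"id": "i-01", "phase": "green"},
              {"id": "i-01", "phase": "green"},
              {"id": "i-01", "phase": "red"}]:
        route(s)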

Swim uses the “needle in a haystack” metaphor to explain this unique advantage. Swim allows you to apply a metal detector while harvesting the grain to find the needle, without having to bale, transport, or store the grain before searching for the needle. The advantage is in continuously processing data, where intelligent web agents can learn over time or be influenced by domain experts who set thresholds.

Because of the stateful architecture of Swim, only the minimum data necessary is transmitted over the network. Furthermore, application services need not wait for the cloud to establish application context. This results in extremely low latencies, as the stateful connections don’t incur the latency cost of reading and writing to a database or updating based on poll requests.

On SWIM.AI’s website, a Smart City application shows the real-time status of lights and traffic across a hundred intersections with thousands of sensors. The client using the app could be a connected or an autonomous car approaching the intersection. It could be a handheld device next to the intersection, or a browser a thousand miles away in the contiguous US. The latency to real-time is 75-150ms, less than the blink of an eye across the internet.

Benefits

  • The immediate benefit is saving costs for transporting and storing data.
  • Through Swim’s technology, you can retain granularity. For example, take the case of tens of terabytes per day generated from every 1,000 traffic light intersections. Swim can winnow that data down to hundreds of gigabytes per day, yet the harvested dataset still fully describes the original raw dataset.
  • Create efficient networked apps for various data sources. For example, achieve peer-to-peer awareness and communications between assets such as vehicles, devices, sensors, and other data sources across the internet.
  • Achieve ultra-low latencies in the 75-150 millisecond range. This is the key to creating apps that depend on data for awareness and communications.

Azure services used in the solution

The demonstration of DataFabric from SWIM.AI relies on core Azure services for security, provisioning, management, and storage. DataFabric also uses the Common Data Model to simplify sharing information with other systems, such as Power BI or PowerApps, in Azure. Azure technology enables the customer’s analytics to be integrated with events and native ML and cognitive services.

DataFabric is based on the Microsoft IoT reference architecture and uses the following core components (a minimal device-to-cloud telemetry sketch follows the list):

  • IoT Hub: Provides a central point in the cloud to manage devices and their data.
  • IoT Edge Field gateway: An on-premises solution for delivering cloud intelligence.
  • Azure Event Hubs: Ingests millions of events per second.
  • Azure Blob storage: Efficient storage with hot, cool, and archive tiers.
  • Azure Data Lake storage: A highly scalable and cost-effective data lake solution for big data analytics.
  • Azure Stream Analytics: For transforming data into actionable insights and predictions in near real-time.
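As a point of reference for the IoT Hub component above, a device-to-cloud telemetry message might be sent like this, assuming the azure-iot-device Python SDK and connection-string authentication (real deployments often use X.509 or DPS-issued credentials instead), with an example payload rather than DataFabric's actual schema.

    import json
    from azure.iot.device import IoTHubDeviceClient, Message

    # Placeholder connection string for a device registered in IoT Hub.
    client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")
    client.connect()

    # Example payload only; a real deployment defines its own telemetry schema.
    telemetry = {"intersectionId": "i-0042", "phase": "green", "vehicleCount": 17}
    client.send_message(Message(json.dumps(telemetry)))

    client.disconnect()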

Next steps

To learn more about other industry solutions, go to the Azure for Manufacturing page.

To find out more about this solution, go to DataFabric for Azure IoT and select Get it now.


Update IoT devices connected to Azure with Mender update manager


With many IoT solutions connecting thousands of hardware endpoints, fixing security issues or upgrading functionality becomes a challenging and expensive task. The ability to update devices is critical for any IoT solution, since it ensures that your organization can respond rapidly to security vulnerabilities by deploying fixes. Azure IoT Hub provides many capabilities to enable developers to build device management processes into their solutions, such as device twins for synchronizing device configuration, and automatic device management to deploy configuration changes across large device fleets. We have previously blogged about how these features have been used to implement IoT device firmware updates.

Some customers have told us they need a turn-key IoT device update manager, so we are pleased to share a collaboration with Mender to showcase how IoT devices connected to Azure can be remotely updated and monitored using the Mender open source update manager. Mender provides robust over-the-air (OTA) update management via full image updates and dual A/B partitioning with rollback, managed and monitored through a web-based management UI. Customers can use Mender to update Linux images that are built with Yocto. By integrating with the Azure IoT Hub Device Provisioning Service, IoT device identity credentials can be shared between Mender and IoT Hub, which is accomplished using a custom allocation policy and an Azure Function. As a result, operators can monitor IoT device states and analytics through their solution built with Azure IoT Hub, and then assign and deploy updates to those devices in Mender because they share device identities.
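For example, a device might report its installed firmware version through the device twin's reported properties so that operators can monitor rollout state from IoT Hub; the sketch below assumes the azure-iot-device Python SDK and an illustrative property schema, not the one Mender actually uses.

    from azure.iot.device import IoTHubDeviceClient

    # Placeholder connection string for a device registered in IoT Hub / DPS.
    client = IoTHubDeviceClient.create_from_connection_string("<device-connection-string>")
    client.connect()

    # Illustrative reported-property schema; Mender and IoT Hub deployments
    # will define their own property names.
    client.patch_twin_reported_properties({
        "firmware": {"version": "2019.06.1", "updateStatus": "installed"}
    })

    client.disconnect()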

Recently, Mender’s CTO Eystein Stenberg came on the IoT Show to show how it works:

Keeping devices updated and secure is important for any IoT solution, and Mender now provides a great new option for Azure customers to implement OTA updates.

Additional resources

•    See Mender’s blog post on how to integrate IoT Hub Device Provisioning Service with Mender
•    Learn more about automatic device management in IoT Hub

Azure Marketplace new offers – Volume 38


We continue to expand the Azure Marketplace ecosystem. For this volume, 121 new offers successfully met the onboarding criteria and went live. See details of the new offers below:

Applications

AGIR Seguranca Cibernetica

AGIR Segurança Cibernética - Certificação PCI DSS: Obtain PCI DSS certification by performing gaps analysis, generating reports with action plans, consulting for correction of nonconformities, conducting vulnerability scans and intrusion tests, and more.

AnatomyX

AnatomyX: By providing an immersive, anatomically accurate learning environment and offering collaborative learning experiences, AnatomyX is revolutionizing the way students learn anatomy.

Artig-io

Artig.io: Artigio is a turnkey solution for the digitalization and publication of museums' and libraries' collections. It offers proven technology based on Azure, unlimited data storage, unlimited users, and more.

Askdata

Askdata: Askdata enables business users to interact with data by leveraging the power of natural language – both text and voice. Askdata proactively delivers AI-driven insights fueled by users' preferences, behavior, or predictive algorithms.

Augmented Reality Enterprise Cloud Platform

Augmented Reality Enterprise Cloud Platform: Imagination Park's Augmented Reality mobile technology allows you to integrate the digital world into the real world in minutes. Create captivating AR content, branding, marketing, and sales campaigns that drive results.

Better Platform

Better Platform: Intended to serve as a health data platform based on community-sourced clinical data models, Better provides a clinical data repository for lifelong electronic patient health records used for care coordination and chronic disease management.

BIIS - Plataforma colaborativa de Transporte

BIIS - Plataforma Colaborativa de Transporte: BIIS offers logistic services for cargo transport where the shipper requests, quotes, and assigns trips to reliable carriers. This application is available only in Spanish.

BIM - Big Data Management

BIM - Big Data Management: Big Data Management helps companies in the three main phases of efficient data management: capture, processing, and visualization.

Blockchain as a Service

Blockchain as a Service: Blockchain as a Service for proxy voting allows central securities depositories, in conjunction with key stakeholders, to provide general meeting services with an easy, user-friendly, and secure tool for voting remotely.

Building

Building: Building is a web app that provides a snapshot of facilities and infrastructure to help manage buildings, goods, parts, staff, and support.

CAMPUS SQUARE

CampusSquare: CampusSquare creates an environment where students, teachers, and staff members share, collect, and provide necessary information quickly and accurately. This application is available only in Japanese.

Centos 7-4

Centos 7.4: Based on CentOS and provided by Northbridge Secure, this distribution of Linux is an optimal solution to deliver Azure cloud servers and applications to your devices of choice.

CipherCraftMail

CipherCraftMail: CipherCraftMail offers mail transmission prevention and attachment file encryption software that is compatible with Office 365. This application is available only in Japanese.

Climalytics

Climalytics: The Climalytics weather analytics solution automatically collects real-time weather data, such as temperature, precipitation, snow cover, and extreme weather data. Climalytics integrates with your data to help turn insights into action.

CLOMO MDM

CLOMO MDM: CLOMO MDM is a device management system offering secure central management of Windows, iOS, and Android devices. This application is available only in Japanese.

Cobrança Eletrônica Agil

Cobrança Eletrônica Ágil: The Agile Electronic Billing solution brings quick and easy agility and security to accounts receivable processes. This application is available only in Portuguese.

ConnectedMagiX

ConnectedMagiX: The ConnectedMagiX platform is designed to help bring your customers' location and identity together to create a new and effective channel of communication.

Contents Lifecycle Manager

Contents Lifecycle Manager: Centrally manage documents through various stages of their lifecycle, such as creation, updating, browsing, searching, storage, and discarding. This application is available only in Japanese.

DAISY

DAISY: DAISY is a business intelligence service that allows anyone to easily connect to data and then visualize and create interactive, sharable dashboards. This application is available only in Korean.

Data Warehouse as a Service

Data Warehouse as a Service: Nihilent helps you fully realize the value of data with Data Warehouse as a Service using the Azure data platform to quickly run complex queries across petabytes of data.

DocumentExplorer

DocumentExplorer: Simply browse, download, and work on your cloud files at remarkable speed, wherever you happen to be. DocumentExplorer is easy to use and optimized for touch interfaces.

eZagel

eZagel: eZagel is a messaging platform that brings together the speed of the internet and the ubiquity of the mobile phone to provide organizations with an interactive, cost-effective messaging platform.

Facture Cloud

Facture Cloud: Facture Cloud offers an easy-to-navigate web solution designed for SMEs. This solution is available only in Spanish in Colombia.

FatPipe MPVPN for Azure

FatPipe MPVPN for Azure: FatPipe's multi-line, multi-provider connectivity options enable customers to use any type of link for access to Azure-hosted applications.

Fraud Analytics

Fraud Analytics: Fraud Analytics combines technology and analytics techniques with human interaction to help detect and prevent potential fraudulent transactions.

FRENDS iPaaS

FRENDS iPaaS: Develop, manage, and secure all your integration and API needs with one simple solution.

GAKUEN

GAKUEN: Designed for the faculty of junior colleges, colleges, and universities, GAKUEN covers all aspects of institution management, administration, and business. This application is available only in Japanese.

Genetec Citigraf

Genetec Citigraf: Genetec Citigraf is a decision support system that unifies public safety operations across city departments, disseminates timely information, and provides greater situational awareness.

Genetec Retail Sense

Genetec Retail Sense: Genetec Retail Sense is an advanced consumer intelligence solution that empowers retailers by using existing security data to deliver insights and help transform customers’ in-store experience.

Genetec Security Center

Genetec Security Center: Genetec Security Center is a unified platform that blends IP video surveillance, access control, automatic license plate recognition, intrusion detection, and communications in one intuitive, modular solution.

Genetec Stratocast

Genetec Stratocast: Genetec Stratocast is a cloud-based video monitoring system that lets you view live and recorded video from your laptop, tablet, or smartphone.

GigaSECURE Cloud 5-6-00 - Hourly 100 Pack

GigaSECURE Cloud 5.6.00 Hourly (100 pack): GigaSECURE Cloud delivers intelligent network traffic visibility for workloads running in Azure and enables increased security, operational efficiency, and scale across virtual networks.

GigaSECURE Cloud 5-6-00

GigaSECURE Cloud 5.6.00: GigaSECURE Cloud delivers intelligent network traffic visibility for workloads running in Azure and enables increased security, operational efficiency, and scale across virtual networks.

GIHS Gijima Integrated Health Solution

GIHS (Gijima Integrated Health Solution): GIHS provides enhanced patient safety and quality of care by giving clinicians secure, flexible access to electronic health record systems while safeguarding the privacy and confidentiality of patient data.

Glassbox

Glassbox: Glassbox empowers organizations to manage and optimize the digital lifecycle of their web and native mobile app customers.

GO1

GO1: GO1 brings all of the world’s top training providers under the same roof, providing you with a one-stop shop for learning.

Gosocket

Gosocket: Gosocket is an open business network that provides services and tools to facilitate the sending and receiving of electronic invoicing information.

Grapevine Orchestration Engine

Grapevine Orchestration Engine: The Grapevine Orchestration Engine unites private and public sector healthcare and research institutions along with technology companies in an effort to establish a global standardization for the exchange of healthcare data.

Haivision Media Gateway 3-0-1

Haivision Media Gateway 3.0.1: Haivision Media Gateway on Azure is ideal for transporting high quality, low-latency live video to multiple locations for corporate events and broadcast distribution.

HintEd

HintEd: HintEd integrates interactive hints and popups into any software to help guide users through the task they are trying to perform.

Horus Project

Horus Project: Horus Project allows company resources to allocate the hours they dedicate to each project and each task, as well as the associated expenses of travel, per diems, transportation, and others. This application is available only in Spanish.

HPC

HPC: Use the HPC cluster system if your computing resources have slowed but you have no budget to buy new compute nodes, or if you want to do parallel computing with additional resources. This application is available only in Japanese.

hubtobee

hubtobee: hubtobee surfs employees’ daily business trips to point out rewarding opportunities for networking that might otherwise be invisible and left to chance.

IncrediBuild Cloud Agent

IncrediBuild Cloud Agent: IncrediBuild turns your network into a virtual supercomputer, harnessing idle CPU cycles from remote machines even while the machines are in use.

IncrediBuild Pure cloud

IncrediBuild Pure Cloud: This IncrediBuild Azure Marketplace instance provides customers who’d like to accelerate their workloads with IncrediBuild in a pure cloud environment a speedy way to onboard.

Info IoT

Info IoT: Connect and manage devices and gateways, analyze streaming data, and transform business processes with Info IoT.

Infovista Ipanema  SD-WAN

Infovista Ipanema SD-WAN: By defining the quality of experience you want for your business-critical apps, Ipanema controls each user session automatically and dynamically, regardless of network conditions.

infsoft LocAware platform

infsoft LocAware platform: infsoft offers solutions for indoor navigation, asset tracking, location analytics, and process automation. In addition to indoor localization systems, infsoft focuses on geo-based assistance systems with analysis and tracking functionalities.

Inline Insight

Inline Insight: Inline Insight offers marketers from all industries a master view to marketing performance across ad channels and campaigns.

Instart Digital Experience Cloud

Instart Digital Experience Cloud: Instart DX Cloud provides continuous insights, context-aware control, and AI-driven optimizations to ensure the best performance, security, and profitability of a website without requiring manual programming or expensive consultants.

Instec Policy

Instec Policy: Configurable and scalable, Instec's policy software backbone includes always-up-to-date bureau content and access to a library of custom rates, rules, and insurance forms so that new products never start from scratch.

Instec Underwriting

Instec Underwriting: Instec Underwriting streamlines the underwriting process and drives greater collaboration among underwriters, agents, and insureds, resulting in lower costs, better insurance underwriting decisions, and a more responsive customer experience.

Instinct Orchestration

Instinct Orchestration: Orchestration enables you and your team to make better informed, more accurate decisions during onboarding. The module combines information from a wealth of data sources to build a clear risk and opportunity profile for each applicant.

Intellicus BI Server V18.1 5 Users

Intellicus BI Server V18.1 (5 Users): With Intellicus BI Server, you can connect multiple, diverse data sources to bring all your data onto one platform; design interactive, visually rich reports and dashboards; and perform a 360-degree analysis on your business data.

Intellicus BI Server V18.1 10 Users

Intellicus BI Server V18.1 (10 Users): With Intellicus BI Server, you can connect multiple, diverse data sources to bring all your data onto one platform; design interactive, visually rich reports and dashboards; and perform a 360-degree analysis on your business data.

Intellicus BI Server V18.1 25 Users

Intellicus BI Server V18.1 (25 Users): With Intellicus BI Server, you can connect multiple, diverse data sources to bring all your data onto one platform; design interactive, visually rich reports and dashboards; and perform a 360-degree analysis on your business data.

Intelligent Annotator

Intelligent Annotator: Intelligent Annotator is a web-based service-oriented image object annotator environment, which makes it easier to label objects on images using smart structuring of image sets and does not require professional knowledge in development.

Intergen App Foundry

Intergen App Foundry: Intergen App Foundry provides rapid creation of tailored, fully responsive web applications for any business process.

InterSystems IRIS Evaluation Edition

InterSystems IRIS Evaluation Edition: The InterSystems IRIS Evaluation Edition is a special version of the InterSystems IRIS Data Platform that has additional data, demos, and sample code included to provide you with the best evaluation experience.

IoT-nxt Energy Management

IoT.nxt Energy Management: The IoT.nxt solution enhances existing ecosystems and enables companies to experience a powerful resurgence with compelling benefits.

Irion EDM Platform and RTG

Irion EDM Platform & RTG: Irion EDM is a complete end-to-end enterprise data management platform including business glossary, metadata dictionary, data modeling, source connectivity, data discovery, data masking engine, rule engine, analytical engine, data delivery, and more.

Irion GDPR Suite

Irion GDPR Suite: Irion offers a complete module solution to manage both the GDPR intrinsic complexity and overall privacy legislation.

IRIS Diabetic Retinopathy Diagnostic Solution

IRIS Diabetic Retinopathy Diagnostic Solution: Our FDA Type II clearance provides an end-to-end diagnostic solution for early detection of disease, from patient identification to reimbursement and referral.

Ithium100 - Blockchain Supply Finance chain app

Ithium100 - Blockchain Supply Finance chain app: Ithium provides a blockchain-based SaaS offering that allows large supply chains to access funds as soon as a budget is approved.

Izenda

Izenda Standalone VM: Izenda is an application-based intelligence provider that brings critical insights to end users across industries.

Jamcracker CSB Service Provider Version 7-0-2

Jamcracker CSB Service Provider Version 7.0.2: This solution automates order management, provisioning, and billing and can be easily integrated to support enterprise ITSM, billing, ERP, and identity systems including Microsoft AD and ADFS.

JAMS V7 BYOL - Server 2016

JAMS V7 (BYOL) - Server 2016: JAMS is an enterprise batch scheduling and workload automation solution. Define, manage, and monitor jobs through a GUI, REST or .NET API, or PowerShell cmdlets.

JumpStart Academy Math

JumpStart Academy Math: JumpStart Academy Math is an award-winning online individualized math program that provides teachers with engaging kindergarten through 5th-grade lessons for their students.

Kbot Virtual Assistant for O365

Kbot Virtual Assistant for O365: Konverso provides an off-the-shelf enterprise virtual support assistant for Office 365 on Azure.

KeenCorp Index

KeenCorp Index: The KeenCorp Index enables companies to avoid confirmation bias and manage for success before any blind spots become liabilities or missed opportunities.

Laso Intelligence Engine

Laso Intelligence Engine: The Laso Intelligence Engine is a fully automated lending platform that uses machine learning for highly predictive risk analytics, which can be easily tailored to service a larger segment of the small business borrower population.

LCPtracker Professional

LCPtracker Professional: LCPtracker is powerful, cloud-based software used to collect, verify, and manage all your contractors' prevailing wage certified payrolls and related labor compliance documentation.

LDAP Manager on Azure

LDAP Manager on Azure: Improve the operational efficiency of ID information and strengthen the security of your organization. This application is available only in Japanese.

LegalSim-Games

LegalSim.Games: This legal workflow simulation games platform is available only in Russian.

Lexplore - AI-based literacy screening method

Lexplore - AI-based literacy screening method: Lexplore is an innovative rapid reading assessment powered by eye-tracking and artificial intelligence technologies.

Liberatii Gateway for Oracle Apps

Liberatii Gateway for Oracle Apps: Liberatii Gateway connects applications developed for Oracle database to Azure SQL.

LiveTiles Design

LiveTiles Design: LiveTiles Design is an all-in-one solution for creating beautiful, engaging sites that foster collaboration throughout your organization, effectively managing content and seamlessly integrating third-party applications in a single platform.

MagiXBill

MagiXBill: MagiXBill is an efficient and reliable call accounting software package for monitoring and reporting telephony activity.

MATLAB Parallel Server (BYOL)

MATLAB Parallel Server (BYOL): This MathWorks-developed reference architecture for Azure incorporates best practices to let you quickly create, configure, and deploy a cluster with MATLAB Parallel Server and MATLAB Job Scheduler.

MATLAB Production Server (BYOL)

MATLAB Production Server (BYOL): This MathWorks-developed reference architecture for Azure incorporates best practices to let you quickly create, configure, and deploy a MATLAB Production Server environment.

MiBI

MiBI: MiBI is an integrated financial planning and reporting solution that provides a holistic view of your organization.

Miracle Mobile Forms

Miracle Mobile Forms: Miracle Mobile Forms helps today’s forward-thinking enterprises overcome the high costs, delays, and potential errors related to paper forms.

moducoding

moducoding: moducoding provides a SaaS platform for retraining and coding tests/interviews for developer hiring. This application is available only in Korean.

Moovit MaaS

Moovit MaaS: Moovit’s Mobility-as-a-Service (MaaS) platform provides a complete software solution for governments, cities, and municipalities that wish to offer MaaS to their citizens.

MOVEit Automation

MOVEit Automation: MOVEit Automation is a managed file transfer automation solution that provides secure, easy-to-use automated workflows without any programming skills.

MQcentral

MQcentral: MQcentral is a cloud-based, fully integrated gateway and device management platform, enabling users to provision devices and gateways on the MachineQ IoT management platform, run diagnostics, manage users, and set custom notifications.

myCloudInstant

myCloudInstant: myCloudInstant automates the deployment, installation, and configuration of SAP solutions on Azure.

Netas Cloud Monitoring

Netas Cloud Monitoring: Netas Cloud Monitoring enables customers to automate and monitor the performance of critical on-premises and cloud workloads. This application is available only in Turkish.

Nimbulis

Nimbulis: Nimbulis consolidates and provides visibility into how information is being leveraged across an organization by bringing together people and the activities around workflows.

NUADU

NUADU: NUADU is a formative, summative, and normative assessment platform deeply driven by data that helps identify students’ learning gaps and then provides content and tools to bridge the gaps effectively.

Nucleus-io

Nucleus.io: Nucleus.io is a medical image management platform that provides secure access to images in the cloud.

Omnichannel Marketing Analytics

Omnichannel Marketing Analytics: Omnichannel Marketing Analytics integrates with your existing environments and marketing systems to bring together your disparate data sources, including web traffic, paid media, social media, traditional media, CRM, and sales.

OmniSci Enterprise Edition 4

OmniSci Enterprise Edition 4: OmniSci (formerly MapD) harnesses the massive parallel computing of GPUs for breakthrough performance at scale.

Outcome Data Platform

Outcome Data Platform: LynxCare's platform enables hospitals and clinics to transform their existing data into insights without the need for double data entry.

Overlay

Overlay+: Overlay+ helps corporate banks and their customers securely collect, index, and exchange information and documentation for transactions.

PowerLine Azure Service

PowerLine Azure Service: Upgrade your Microsoft Access and other desktop applications to Azure with no coding.

Prisma by Mobilise IT

Prisma by Mobilise IT: Prisma’s real-time administration tools put control back in the hands of IT to continuously monitor, improve, and evolve service as business requirements change.

ProductOne

ProductOne: ProductOne is intended for organizations and specialists who are engaged in software development based on Blockchain technology.

ProgOffice_Enterprise

ProgOffice_Enterprise: ProgOffice Enterprise is a communication portal and cloud phonebook service that enables you to combine the information you want into one phonebook application. This application is available only in Japanese.

ReAccess

ReAccess: ReAccess provides easy access to your Azure cloud data and services as well as conversion services for Microsoft Access and other similar databases.

Refresh non-productive SAP systems easily

Refresh non-productive SAP systems easily: Libelle SystemCopy is a software solution that automates SAP system and landscape copies.

ReWeb

ReWeb: ReWeb provides easy access to your Azure cloud data and services as well as conversion services for Microsoft Access and other similar databases.

RF Campus

RF Campus: RF Campus is a campus management system/student information system that enables educational institutions to automate and streamline their processes.

Sensin

Sensin: Understand the behavior of customers who visit physical spaces in real time.

shelfie

Shelfie: Shelfie uses computer vision to understand the shelf environment and alert staff about gaps. Shelfie knows what the “perfect shelf” looks like and lets your staff know what needs to be fixed.

Shelfr

Shelfr: Shelfr is an in-store solution that provides FMCGs, CPGs, distributors, and retailers easy-to-use tools to plan store-specific planograms, track execution, and inspire action.

Shopper Insights

Shopper Insights: Shopper Insights is a cloud-based platform that provides dynamic customer segmentation (personas), strategic insights, and actionable recommendations to retailers, malls, and brands.

SmartCharge

SmartCharge: By combining innovative, cutting-edge software and hardware solutions, SmartCharge solves the challenges of urban e-mobility.

Söze - Intelligence and Insights at Scale

Söze - Intelligence and Insights at Scale: Söze delivers more investigative avenues by accelerating the work currently handled by manual processes. It works by discovering information of potential interest to police, including relationships between information.

SRI - Small Retail Insights

SRI - Small Retail Insights: Small Retail Insights helps industries visualize the performance of products at neighborhood markets. This application is available only in Portuguese.

staicy

staicy: staicy is a digital health data management platform that helps those in the medical industry to go from data to knowledge by managing, analyzing, correlating, and integrating health and research data.

StatRad

StatRad: StatRad is a leading teleradiology service for hospitals and radiology groups across the United States.

Time Management System

Time Management System: Make every second count with a user-friendly time and attendance solution that will help your organization accurately and efficiently manage people’s time, improve productivity, and more.

tno endeavor

tno endeavor: tno endeavor is a simple-to-use, secure, cost-effective solution supporting nonprofits that provide autism services, case management, counseling, crisis services, family services, veteran services, and other social service programs.

Unified Source to Pay built natively on Azure

Unified Source to Pay built natively on Azure: SMART by GEP delivers comprehensive spend, sourcing, and procurement functionality in a single, unified platform for direct and indirect spend management.

VContact

VContact: VContact is a real-time speech recognition AI solution for call centers. This application is available only in Japanese.

Warranty Analytics

Warranty Analytics: The SightN2 for Warranty Analytics solution automatically ingests heterogeneous claims data from disparate sources to aggregate and analyze trends in warranty claims and processing.

WEP - Warehouse Efficiency and Productivity

WEP - Warehouse Efficiency and Productivity: WEP is a warehouse management system that optimizes, automates, measures, and controls your storage operations and distribution centers. This application is available only in Spanish.

Where You Love

Where You Love: Where You Love identifies sellers at the beginning of their real estate project and sends you their detailed information.

Consulting services

CD-DevOps Consulting Services - 8-wk Imp

CD/DevOps Consulting Services - 8-wk Imp: This consulting service helps enterprises align development and operations to achieve higher efficiency and faster time to market and to increase profitability while reducing the cost of software assets using Azure DevOps.

SAP to Azure Migration - 1-Week Implementation

SAP to Azure Migration: 1-Week Implementation: Infopulse uses the most advantageous migration scenario matching your business objectives to deliver enhanced agility, automated administration, lower costs, less complexity, and higher availability.

Three things to know about Azure Machine Learning Notebook VM

Data scientists have a dynamic role. They need environments that are fast and flexible while upholding their organization’s security and compliance policies.

Data scientists working on machine learning projects need a flexible environment in which to run experiments, train models, iterate on them, and innovate. They want to focus on building, training, and deploying models without getting bogged down in prepping virtual machines (VMs), tediously entering configuration parameters, and constantly going back to IT to make changes to their environments. Moreover, they need to remain within the compliance and security policies outlined by their organizations.

Organizations seek to empower their data scientists to do their jobs effectively while keeping the work environment secure. Enterprise IT pros want to lock down security and use a centralized authentication system. Data scientists, meanwhile, want direct access to virtual machines (VMs) so they can tinker with low-level details like CUDA drivers and specific versions of the latest machine learning frameworks. However, direct access to the VM makes it hard for IT pros to enforce security policies. Azure Machine Learning service is developing features that let data scientists get the most out of their data and focus on their business objectives while maintaining their organizations’ security and compliance posture.

Azure Machine Learning service’s Notebook Virtual Machine (VM), announced in May 2019, resolves these conflicting requirements while simplifying the overall experience for data scientists. Notebook VM is a cloud-based workstation created specifically for data scientists. Notebook VM-based authoring is directly integrated into Azure Machine Learning service, providing a code-first experience for Python developers to conveniently build and deploy models in the workspace. Developers and data scientists can perform every operation supported by the Azure Machine Learning Python SDK using a familiar Jupyter notebook in a secure, enterprise-ready environment. Notebook VM is secure and easy to use, preconfigured for machine learning, and fully customizable.

Let’s take a look at how Azure Machine Learning service Notebook VMs are:

  1. Secure and easy to use
  2. Preconfigured for machine learning
  3. Fully customizable

1. Secure and easy to use

When a data scientist creates a notebook on a standard infrastructure-as-a-service (IaaS) VM, the process requires a lot of intricate, IT-specific parameters: a VM name, an image, security settings (virtual network, subnet, and more), storage accounts, and a variety of other details. If incorrect parameters are given, or details are overlooked, this can open an organization up to serious security risks.

Compared to an IaaS VM, the Notebook VM creation experience has been streamlined, as it takes just two parameters – a VM name and a VM type. Once the Notebook VM is created it provides access to Jupyter and JupyterLab – two popular notebook environments for data science. The access to the notebooks is secured out-of-the-box with HTTPS and Azure Active Directory, which makes it possible for IT pros to enforce a single sign-on environment with strong security features like Multi-Factor Authentication, ensuring a secure environment in compliance with organizational policies.

Azure Machine Learning Notebook VM - //build2019 demo

2. Preconfigured for machine learning

Setting up GPU drivers and deploying libraries on a traditional IaaS VM can be cumbersome and require a substantial amount of time. It can also be complicated to find the right drivers for a given combination of hardware, libraries, and frameworks. For instance, the latest versions of PyTorch may not work with the drivers a data scientist is currently using. Installation of client libraries for services such as the Azure Machine Learning Python SDK can also be time-consuming, and some Python packages can be incompatible with others, depending on the environment where they are installed.

Notebook VM has the most up-to-date, compatible packages preconfigured and ready to use. This way, data scientists can use any of the latest frameworks on Notebook VM without versioning issues and with access to all the latest functionality of Azure Machine Learning service. Inside the VM, along with Jupyter and JupyterLab, data scientists will find a fully prepared environment for machine learning. Notebook VM draws its pedigree from the Data Science Virtual Machine (DSVM), a popular IaaS VM offering on Azure. Similar to the DSVM, it comes equipped with preconfigured GPU drivers and a selection of machine learning and deep learning frameworks.

Notebook VM is also integrated with its parent, Azure Machine Learning workspace. The notebooks that data scientists run on the VM have access to the data stores and compute resources of the workspace. The notebooks themselves are stored in a Blob Storage account of the workspace. This makes it easy to share notebooks between VMs, as well as keeps them safely preserved when the VM is deleted.

3. Fully customizable

In environments where IT pros prepare virtual machines for data scientists, there is usually a rigorous preparation process and there are limits on what can be done on those machines. Data scientists, however, are highly dynamic and need the ability to customize VMs to fit their ever-changing needs. This often means going back to IT pros to have them make the necessary changes to the VMs. Even then, data scientists hit blockers when iterations don’t meet their needs or take too long. Some data scientists will resort to using their personal laptops to run jobs their corporate VMs don’t support, breaking compliance policies and putting the organization at risk.

While Notebook VM is a managed VM offering, it retains full access to hardware capabilities. Data scientists can create a VM of any type supported by Azure and customize it to their heart’s desire by adding custom packages and drivers. For example, a data scientist can quickly create the latest NVIDIA V100-powered VM to perform step-by-step debugging of novel neural network architectures.

Get started

If you are working with code, Notebook VM will offer you a smooth experience. It includes a set of tutorials and samples that put every capability of the Azure Machine Learning service just one click away. Give it a try and let us know your feedback.

Learn more about the Azure Machine Learning service. Get started with a free trial of the Azure Machine Learning service.

Migrating a Sample WPF App to .NET Core 3 (Part 1)

Olia recently wrote a post about how to port a WinForms app from .NET Framework to .NET Core. Today, I’d like to follow that up by walking through the steps to migrate a sample WPF app to .NET Core 3. Many of these steps will be familiar from Olia’s post, but I’ve tried to differentiate this one by including some additional common dependencies that users are likely to run into like WCF client usage or third-party UI packages.

To keep this post from being too long, I will split it into two parts. In this first part, we’ll prepare for migration and create a new csproj file for the .NET Core version of the app. In the second post, we’ll make the actual code changes necessary to get the app working on .NET Core.

These posts don’t focus on any one particular porting issue. Instead, they’re meant to give an overview of the steps needed to port a sample WPF app. If there are particular .NET Core migration topics you’d like a deeper look at, let us know in the comments.

Video walkthrough

If you would prefer a video demonstration of migrating the application, I have posted a series of YouTube videos of me porting the app.

About the sample

For this exercise, I wrote a simple commodity trading app called ‘Bean Trader’. Users of the app have accounts with different numbers of beans (which come in four different colors). Using the app, users can propose and accept trades with other users. The app isn’t particularly large (~2,000 lines of code), but is meant to be a step up from ‘Hello World’ in terms of complexity so that we can see some issues users may encounter while porting real applications.

Bean Trader Sample App

Interesting dependencies in the app include:

  • WCF communication with a backend trading service via a duplex NetTcp channel
  • UI styling and dialogs from MahApps.Metro
  • Dependency injection with Castle.Windsor (though, of course, many DI solutions – including Microsoft.Extensions.DependencyInjection – could be used in this scenario)
  • App settings in app.config and the registry
  • A variety of resources and resx files

The app source is available on GitHub in case you want to follow along with this blog post. The original source (prior to porting) is available in the NetFxBeanTraderClient directory. The final, ported application is in the NetCoreBeanTraderClient directory. The backend service, which the app needs to communicate with, is available in the BeanTraderServer folder.

Disclaimer

Keep in mind that this sample app is meant to demonstrate .NET Core porting challenges and solutions. It’s not meant to demonstrate WPF best practices. In fact, I’ve deliberately included some anti-patterns in the app to make sure we encounter at least a couple interesting challenges while porting.

Migration process overview

The migration process from .NET Framework to .NET Core consists of four major steps.
.NET Core Migration Process

  1. First, it is useful to prepare for the migration by understanding the project’s dependencies and getting the project into an easily portable state.
    1. This includes using tools like the .NET Portability Analyzer to understand .NET Framework dependencies.
    2. It also includes updating NuGet references to use the <PackageReference> format and, possibly, updating NuGet package versions.
  2. Second, the project file needs to be updated. This can be done either by creating a new project file or by modifying the current one in-place.
  3. Third, the source code may need some updates based on different API surface areas either in .NET Core or in the .NET Core versions of required NuGet packages. This is typically the step that takes the longest.
  4. Fourth, don’t forget to test the migrated app! Some .NET Core/.NET Framework differences don’t show up until runtime (though there are Roslyn code analyzers to help identify those cases).

Step 1: Getting ready

Have the sample cloned and ready to go? Great; let’s dive in!

The primary challenge with migrating a .NET Framework app to .NET Core is always that its dependencies may work differently (or not work at all!) on .NET Core. Migration is much easier than it used to be – many NuGet packages now target .NET Standard and, starting with .NET Core 2.0, the .NET Framework and .NET Core surface areas have become quite similar. Even so, some differences (both in support from NuGet packages and in available .NET APIs) remain.

Upgrade to <PackageReference> NuGet references

Older .NET Framework projects typically list their NuGet dependencies in a packages.config file. The new SDK-style project file format references NuGet packages differently, though. It uses <PackageReference> elements in the csproj file itself (rather than in a separate config file) to reference NuGet dependencies. Fortunately, old-style csproj files can also use this more modern syntax.
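For reference, here is the same dependency expressed both ways (the package name and versions are purely illustrative, not taken from the Bean Trader project):

```xml
<!-- Old style: packages.config (a separate file that sits next to the csproj) -->
<packages>
  <package id="Newtonsoft.Json" version="12.0.1" targetFramework="net472" />
</packages>

<!-- New style: a <PackageReference> item inside the project file itself -->
<ItemGroup>
  <PackageReference Include="Newtonsoft.Json" Version="12.0.1" />
</ItemGroup>
```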

When migrating, there are two advantages to using <PackageReference>-style references:

  1. This is the style of NuGet reference that will be required for the new .NET Core project file. If you’re already using <PackageReference>, those project file elements can be copied and pasted directly into the new project.
  2. Unlike a packages.config file, <PackageReference> elements only refer to the top-level dependencies that your project depends on directly. All other transitive NuGet packages will be determined at restore time and recorded in the autogenerated obj\project.assets.json file. This makes it much easier to reason about what dependencies your project has, which is useful when determining whether the necessary dependencies will work on .NET Core or not.

So, the first step to migrating the Bean Trader sample is to migrate it to use <PackageReference> NuGet references. Visual Studio makes this simple. Just right-click on the project’s packages.config file in Visual Studio’s solution explorer and select ‘Migrate packages.config to PackageReference’.
Upgrading to PackageReference
A dialog will appear showing calculated top-level NuGet dependencies and asking which other NuGet packages should be promoted to top-level. None of these other packages need to be top-level for the Bean Trader sample, so you can uncheck all of those boxes. Then, click ‘Ok’ and the packages.config file will be removed and <PackageReference> elements will be added to the project file.

<PackageReference>-style references don’t store NuGet packages locally in a ‘packages’ folder (they are stored globally, instead, as an optimization) so, after the migration completes, you will need to edit the csproj file and remove the <Analyzer> elements referring to the FxCop analyzers that previously came from the ..\packages directory. Don’t worry – since we still have the NuGet package reference, the analyzers will be included in the project. We just need to clean up the old packages.config-style <Analyzer> elements.

Review NuGet packages

Now that it’s easy to see the top-level NuGet packages our project depends on, we can review whether those packages will be available on .NET Core or not.

You can know whether a package supports .NET Core by looking at its dependencies on nuget.org. The community-created fuget.org site also shows this information prominently at the top of the package information page.

When targeting .NET Core 3, any packages targeting .NET Core or .NET Standard should work (since .NET Core implements the .NET Standard surface area). You can use packages targeting .NET Framework, as well, but that introduces some risk. .NET Core to .NET Framework dependencies are allowed because .NET Core and .NET Framework surface areas are similar enough that such dependencies often work. However, if the package tries to use a .NET API that is not present in .NET Core, you will encounter a runtime exception. Because of that, you should only reference .NET Framework packages when no other options are available and understand that doing so imposes a test burden.

In the case of the Bean Trader sample, we have the following top-level NuGet dependencies:

  • Castle.Windsor, version 4.1.1. This package targets .NET Standard 1.6, so it will work on .NET Core.
  • Microsoft.CodeAnalysis.FxCopAnalyzers, version 2.6.3. This is a meta-package so it’s not immediately obvious which platforms it supports, but documentation indicates that its newest version (2.9.2) will work for both .NET Framework and .NET Core.
  • Nito.AsyncEx, version 4.0.1. This package does not target .NET Core, but the newer 5.0 version does. This is common when migrating because many NuGet packages have added .NET Standard support over the last year or so, but older projects will be using older versions of these packages. If the version difference is only a minor version difference, it’s often easy to upgrade to the newer version. Because this is a major version change, we will need to be cautious upgrading since there could be breaking changes in the package. There is a path forward, though, which is good.
  • MahApps.Metro, version 1.6.5. This package, also, does not target .NET Core, but has a newer pre-release (2.0-alpha) that does. Again, we will have to look out for breaking changes, but the newer package is encouraging.

The Bean Trader sample’s NuGet dependencies all either target .NET Standard/.NET Core or have newer versions that do, so there are unlikely to be any blocking issues here.

If there had been packages that didn’t target .NET Core or .NET Standard, we would have to think about other alternatives:

  • Are there other similar packages that can be used instead? Sometimes NuGet authors publish separate ‘.Core’ versions of their libraries specifically targeting .NET Core, so we could search for those. Enterprise Library packages are an example of the community publishing “.NetCore”-suffixed alternatives.
  • If no alternatives are available, we can proceed using the .NET Framework-targeted packages, bearing in mind that we will need to test them thoroughly once running on .NET Core.

Upgrade NuGet packages

If possible, it would be good to upgrade the versions of these packages to ones that target .NET Core or .NET Standard at this point to discover and address any breaking changes early.

If you would rather not make any material changes to the existing .NET Framework version of the app, this can wait until we have a new project file targeting .NET Core. If possible, though, upgrading the NuGet packages to .NET Core-compatible versions ahead of time makes the migration process even easier once it comes time to create the new project file and reduces the number of differences between the .NET Framework and .NET Core versions of the app.

In the case of the Bean Trader sample, all of the necessary upgrades can be made easily (using Visual Studio’s NuGet package manager) with one exception: upgrading from MahApps.Metro 1.6.5 to 2.0 reveals breaking changes related to theme and accent management APIs.

Ideally, the app would be updated to use the newer version of the package (since that is more likely to work on .NET Core). In some cases, though, that may not be feasible. In this case, I’m not going to upgrade MahApps.Metro because the necessary changes are non-trivial and this walkthrough is supposed to focus on migrating to .NET Core 3, not to MahApps.Metro 2. Also, this is a low-risk .NET Framework dependency because the Bean Trader app only exercises a small part of MahApps.Metro. It will, of course, require testing to make sure everything’s working once the migration is complete. If this were a real-world scenario, I would file an issue to track the work to move to MahApps.Metro version 2.0 since skipping that migration now leaves behind some technical debt.

Once the NuGet packages are updated to recent versions, the <PackageReference> item group in the Bean Trader sample’s project file should look like this:
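As a rough sketch, based on the packages and versions discussed above, the upgraded item group looks something like this (check the repository for the exact versions used):

```xml
<ItemGroup>
  <PackageReference Include="Castle.Windsor" Version="4.1.1" />
  <!-- MahApps.Metro deliberately stays on the 1.x line for now (see above) -->
  <PackageReference Include="MahApps.Metro" Version="1.6.5" />
  <PackageReference Include="Microsoft.CodeAnalysis.FxCopAnalyzers" Version="2.9.2" />
  <PackageReference Include="Nito.AsyncEx" Version="5.0.0" />
</ItemGroup>
```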

.NET Framework portability analysis

Now that we feel good about the state of the project’s NuGet dependencies, let’s consider .NET Framework API dependencies. The .NET Portability Analyzer tool is useful for understanding which of the .NET APIs your project uses are available on other .NET platforms.

The tool comes as a Visual Studio plugin, a command line tool, or wrapped in a simple GUI which simplifies its options and always reports on .NET Core 3 compatibility.

In Olia’s previous post, she used the GUI, so I’ll use the command line interface for variety. The necessary steps are:

  1. Download the API Portability Analyzer if you don’t already have it.
  2. Make sure the .NET Framework app to be ported builds successfully (this is a good idea prior to migration anyhow!).
  3. Run API Port with a command line like this:
    1. ApiPort.exe analyze -f <PathToBeanTraderBinaries> -r html -r excel -t ".NET Core"
    2. The -f argument specifies the path containing the binaries to analyze. The -r argument specifies which output file format you want. I find both HTML and Excel outputs useful. The -t argument specifies which .NET platform we are analyzing API usage against. In this case, we want .NET Core since that’s the platform we are moving to. Since no version is specified, API Port will default to the latest version of the platform (.NET Core 3.0 in this case).

When you open the HTML report, the first section will list all of the binaries that were analyzed and what percentage of the .NET APIs they use are available on the targeted platform. The percentage is not very meaningful by itself. What’s more useful is to see the specific APIs that are missing. To do that, either click an assembly name or scroll down to the reports for individual assemblies.

You only need to be concerned about assemblies that you own the source code for. In the Bean Trader ApiPort report, there are a lot of binaries listed, but most of them belong to NuGet packages. Castle.Windsor, for example, shows that it depends on some System.Web APIs that are missing in .NET Core. This isn’t a concern, though, because we previously verified that Castle.Windsor supports .NET Core. It is common for NuGet packages to have different binaries for use with different .NET platforms, so whether the .NET Framework version of Castle.Windsor uses System.Web APIs or not is irrelevant as long as the package also targets .NET Standard or .NET Core (which it does).

In the case of the Bean Trader sample, the only binary that we need to consider is BeanTraderClient, and the report shows that only two .NET APIs are missing – System.ServiceModel.ClientBase<T>.Open and System.ServiceModel.ClientBase<T>.Close.
BeanTraderClient portability report

These are unlikely to be blocking issues because WCF Client APIs are (mostly) supported on .NET Core, so there must be alternatives available for these central APIs. In fact, looking at System.ServiceModel‘s .NET Core surface area (using https://apisof.net), we see that there are async alternatives in .NET Core instead.

Based on this report and the previous NuGet dependency analysis, it looks like there should be no major issues migrating the Bean Trader sample to .NET Core. We’re ready for the next step in which we’ll actually start the migration.

Migrating the project file

Because .NET Core uses the new SDK-style project file format, our existing csproj file won’t work. We’re going to need a new project file for the .NET Core version of the Bean Trader app. If we didn’t need to build the .NET Framework version of the app going forward, we could just replace the existing csproj file. But, often, developers want to be able to build both versions – especially for the time being since .NET Core 3 is still in preview.

There are three options for where the new csproj file should live, each of which has its own pros and cons:

  1. We can use multi-targeting (specifying multiple <TargetFrameworks> targets) to have a single project file that builds both .NET Core and .NET Framework versions of the solution. In the future, this will probably be the best option. Currently, though, a number of design-time features don’t work well with multi-targeting. So, for now, the recommendation is to have separate project files for .NET Core and .NET Framework-targeted versions of the app.
  2. We can put the new project file in a different directory. This makes it easy to keep build output separate, but means that we won’t be taking advantage of the new project system’s ability to automatically include C# and XAML files. Also, it will be necessary to include <Link> elements for XAML resources so that they are embedded with correct paths.
  3. We can put the new project file in the same directory as the current project file. This avoids the issues of the previous option but will cause the obj and bin folders for the two projects to conflict. If you only open one of the projects at a time, this shouldn’t be an issue. But if they will both be open simultaneously, you will need to update the projects to use different output and intermediate output paths.

I prefer option 3 (having the project files live side-by-side), so I will use that approach for this porting sample.

To actually create the new project file, I usually use a dotnet new wpf command in a temporary directory to generate the project file and then copy/rename it to the correct location. There is also a community-created tool CsprojToVs2017 that can automate some of the migration. In my experience, the tool is helpful but still needs a human to review the results to make sure all the details of the migration are correct. One particular area that the tool doesn’t handle optimally is migrating NuGet packages from packages.config files. If the tool runs on a project file that still uses a packages.config file to reference NuGet packages, it will migrate to <PackageReference> elements automatically, but will add <PackageReference> elements for all of the packages instead of just for top-level ones. If you have already migrated to <PackageReference> elements with Visual Studio, though (as we have done in this case), then the tool can help with the rest of the conversion. Like Scott Hanselman recommends in his blog post on migrating csproj files, I think porting by hand is educational and will give better results if you only have a few projects to port. But if you are porting dozens or hundreds of project files, then a tool like CsprojToVs2017 can be a big help.

So, if you’re following along at home, run dotnet new wpf in a temporary directory and move the generated csproj file into the BeanTraderClient folder and rename it BeanTraderClient.Core.csproj.

Because the new project file format automatically includes C# files, resx files, and XAML files that it finds in or under its directory, the project file is already almost complete! To finish the migration, I like to open the old and new project files side-by-side and look through the old one seeing if any information it contains needs to be migrated. In this case, the following items should be copied to the new project:

  • The <RootNamespace>, <AssemblyName>, and <ApplicationIcon> properties should all be copied.
  • I also need to add a <GenerateAssemblyInfo>false</GenerateAssemblyInfo> property to the new project file since the Bean Trader sample includes assembly-level attributes (like [AssemblyTitle]) in an AssemblyInfo.cs file. By default, new SDK-style projects will auto-generate these attributes based on properties in the csproj file. Because we don’t want that to happen in this case (the auto-generated attributes would conflict with those from AssemblyInfo.cs), we disable the auto-generated attributes with <GenerateAssemblyInfo>.
  • Although resx files are automatically included as embedded resources, other <Resource> items like images are not. So, copy the <Resource> elements for embedding image and icon files. We can simplify the png references to a single line by using the new project file format’s support for globbing patterns: <Resource Include="**\*.png" />.
  • Similarly, <None> items will be included automatically, but they will not be copied to the output directory by default. Because the Bean Trader project includes a <None> item that is copied to the output directory (using PreserveNewest behavior), we need to update the automatically populated <None> item for that file (see the project file sketch after this list).
  • The Bean Trader sample includes a XAML file (Default.Accent.xaml) as Content (rather than as a Page) because themes and accents defined in this file are loaded from the file’s XAML at runtime, rather than being embedded in the app itself. The new project system automatically includes this file as a <Page>, of course, since it’s a XAML file. So, we need to both remove the XAML file as a page (<Page Remove="**\Default.Accent.xaml" />) and add it as content (again, see the sketch after this list).
  • Finally, add NuGet references by copying the <ItemGroup> with all the <PackageReference> elements. If we hadn’t previously upgraded the NuGet packages to .NET Core-compatible versions, we could do that now that the package references are in a .NET Core-specific project.
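Pulled together, the additions described above might look roughly like the following in BeanTraderClient.Core.csproj. The <None> file name is a placeholder – use whichever file the original project copied – and the CopyToOutputDirectory setting on the accent file is an assumption based on it being loaded from disk at runtime:

```xml
<PropertyGroup>
  <!-- Keep the attributes from AssemblyInfo.cs instead of auto-generating them -->
  <GenerateAssemblyInfo>false</GenerateAssemblyInfo>
</PropertyGroup>

<ItemGroup>
  <!-- Embed all png images with a single glob -->
  <Resource Include="**\*.png" />

  <!-- Placeholder file name: make the auto-included <None> item copy to the output directory -->
  <None Update="SomeContentFile.txt" CopyToOutputDirectory="PreserveNewest" />

  <!-- Treat Default.Accent.xaml as loose content rather than a compiled page -->
  <Page Remove="**\Default.Accent.xaml" />
  <Content Include="Default.Accent.xaml" CopyToOutputDirectory="PreserveNewest" />
</ItemGroup>
```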

At this point, it should be possible to add the new project to the BeanTrader solution and open it in Visual Studio. The project should look correct in the solution explorer and dotnet restore BeanTraderClient.Core.csproj should successfully restore packages (with two expected warnings related to the MahApps.Metro version we’re using targeting .NET Framework).

This is probably a good breaking point between parts one and two of this article. In the second post, we’ll get the app building and running against .NET Core.

The post Migrating a Sample WPF App to .NET Core 3 (Part 1) appeared first on .NET Blog.

Migrating a Sample WPF App to .NET Core 3 (Part 2)

In part 1 of this blog series, I began the process of porting a sample WPF app to .NET Core. In that post, I described the .NET Core migration process as having four steps:
.NET Core Migration Process
We previously went through the first two steps – reviewing the app and its dependencies (including NuGet dependencies and a .NET Portability Analyzer report), updating NuGet package references, and migrating the project file. In this post, we’ll complete the migration by making the necessary code changes to get the app building and running against .NET Core 3.

Step 3: Fix build issues

The third step of the porting process is getting the project to build. If you try to run dotnet build on the sample project now (or build it in VS), there will be about 100 errors, but we’ll get them fixed up quickly.

System.ServiceModel references and Microsoft.Windows.Compatibility

The majority of the errors are due to missing System.ServiceModel types. These can be easily addressed by referencing the necessary WCF NuGet packages. An even better solution, though, is to use the Microsoft.Windows.Compatibility package. This metapackage includes a wide variety of .NET packages that work on .NET Core but that don’t necessarily work cross-platform. The APIs in the compatibility pack include APIs relating to WCF client, directory services, registry, configuration, ACLs, and more.

In most .NET Core 3 WPF and WinForms porting scenarios, it will be useful to reference the Microsoft.Windows.Compatibility package preemptively since it includes a broad set of APIs that are common in WPF and WinForms .NET Framework apps.
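The reference itself is a single <PackageReference> line in the new project file (the version below is illustrative; pick the latest version that supports your .NET Core target):

```xml
<ItemGroup>
  <!-- Illustrative version; check NuGet for the latest -->
  <PackageReference Include="Microsoft.Windows.Compatibility" Version="2.0.1" />
</ItemGroup>
```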

After adding the NuGet reference to Microsoft.Windows.Compatibility, only one build error remains!

Cleaning up unused files

The next build error we see in the sample refers to a bad interface implementation in OldUnusedViewModel.cs. The file name is a hint, but on inspection we find that, in fact, this source file is incorrect. It didn’t cause issues previously because it wasn’t included in the original .NET Framework project. This sort of issue comes up frequently when migrating to .NET Core since SDK-style projects include all C# (and XAML) sources by default. Source files that were present on disk but not included in the old csproj now get included automatically.

For one-off issues like this, it’s easy to compare against the previous csproj to confirm that the file isn’t needed and then either exclude it (<Compile Remove="OldUnusedViewModel.cs" />) or, if the source file isn’t needed anywhere anymore, delete it. In this case, it’s safe to just delete OldUnusedViewModel.cs.

If you have many source files that would need to be excluded this way, you can disable auto-inclusion of C# files by setting the <EnableDefaultCompileItems> property to false in the project file. Then, you can copy <Compile Include> items from the old project file to the new one in order to only build sources you intended to include. Similarly, <EnableDefaultPageItems> can be used to turn off auto-inclusion of XAML pages and <EnableDefaultItems> can control both with a single property.
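For example, a project that opts out of automatic C# inclusion might contain something along these lines (the file names are placeholders, not the Bean Trader project’s real sources):

```xml
<PropertyGroup>
  <!-- Stop the SDK from globbing every .cs file under the project directory -->
  <EnableDefaultCompileItems>false</EnableDefaultCompileItems>
</PropertyGroup>

<ItemGroup>
  <!-- Explicitly list only the sources you intend to build (placeholder names) -->
  <Compile Include="App.xaml.cs" />
  <Compile Include="MainWindow.xaml.cs" />
</ItemGroup>
```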

A brief aside on multi-pass compilers

After removing the offending file, we re-build and get five errors. Didn’t we have just one before? Why did the number of errors go up? The C# compiler is a multi-pass compiler. This means that it goes through each source file twice* – first the compiler just looks at metadata and declarations in each source file and identifies any declaration-level problems. Those are the errors we’ve just fixed. Then, it goes through the code again to build the C# source into IL (those are the second set of errors that we’re seeing now).

* In reality, the C# compiler does more than just two passes (as explained in Eric Lippert’s blog on the topic), but the end result is that compiler errors for large code changes like this tend to come in two waves.

Third-party dependency fix-ups (Castle.Windsor)

The next set of errors we see are related to Castle.Windsor APIs. This may seem surprising since the .NET Core Bean Trader project is using the same version of Castle.Windsor as the .NET Framework-targeted project (4.1.1). The differences are because within a single NuGet package, there can be different libraries for use with different .NET targets. This allows the packages to support many different .NET platforms which may require different implementations. It also means that there may be small API differences in the libraries when targeting different .NET platforms.

In the case of the BeanTrader sample, we see the following issues that need to be fixed up:

  1. Castle.MicroKernel.Registration.Classes.FromThisAssembly is not available on .NET Core. There is, however, the very similar API Classes.FromAssemblyContaining available, so we can replace both uses of Classes.FromThisAssembly() with calls to Classes.FromAssemblyContaining(t) where t is the type making the call.
  2. Similarly, in Bootstrapper.cs, Castle.Windsor.Installer.FromAssembly.This is unavailable on .NET Core. Instead, that call can be replaced with FromAssembly.Containing(typeof(Bootstrapper)).
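A minimal sketch of those two substitutions, using Bootstrapper as the type that identifies the assembly (the registration filter shown is a placeholder, not the Bean Trader app’s actual registrations):

```csharp
using Castle.MicroKernel.Registration;
using Castle.Windsor;
using Castle.Windsor.Installer;

public class Bootstrapper
{
    public IWindsorContainer CreateContainer()
    {
        var container = new WindsorContainer();

        // FromAssembly.This() is unavailable on the .NET Core build of Castle.Windsor;
        // FromAssembly.Containing works the same way on both targets.
        container.Install(FromAssembly.Containing(typeof(Bootstrapper)));

        // Classes.FromThisAssembly() is likewise unavailable; pass a type from the
        // assembly you want scanned to Classes.FromAssemblyContaining instead.
        container.Register(Classes.FromAssemblyContaining(typeof(Bootstrapper))
            .Pick()                  // placeholder filter
            .LifestyleTransient());

        return container;
    }
}
```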

Updating WCF client usage

Having fixed the Castle.Windsor differences, the last remaining build errors in the .NET Core project are that BeanTraderServiceClient (which derives from DuplexClientBase) does not have Open or Close methods. This is not surprising since these are the missing APIs that were highlighted by the .NET Portability Analyzer at the beginning of this migration process.

Looking at BeanTraderServiceClient draws our attention to a larger issue, though. This WCF client was auto-generated by the Svcutil.exe tool.

WCF clients generated by Svcutil are only meant for use on .NET Framework. Solutions that use svcutil-generated WCF clients will need to regenerate .NET Standard-compatible clients for use with .NET Core. One of the main reasons the old clients won’t work is that they depend on app configuration for defining WCF bindings and endpoints. Because .NET Standard WCF APIs can work cross-platform (where System.Configuration APIs aren’t available), WCF clients for .NET Core and .NET Standard scenarios must define bindings and endpoints programmatically instead of in configuration.

In fact, any WCF client usage that depends on the <system.serviceModel> app.config section (whether created with Svcutil or manually) will need to be changed to work on .NET Core.

There are two ways to automatically generate .NET Standard-compatible WCF clients:

  • The dotnet-svcutil tool is a .NET Core CLI tool that generates WCF clients similar to how Svcutil worked previously.
  • Visual Studio can generate WCF clients using the WCF Web Service Reference option of its Connected Services feature.

Either approach works well. Alternatively, of course, you could write the WCF client code yourself. For this sample, I chose to use the Visual Studio Connected Service feature. To do that, right click on the BeanTraderClient.Core project in Visual Studio’s solution explorer and select Add -> Connected Service. Next, choose the WCF Web Service Reference Provider. This will bring up a dialog where you can specify the address of the backend Bean Trader web service and the namespace that generated types should use.
WCF Web Service Reference Connected Service dialog
After clicking the Finish button, a new ‘Connected Services’ node is added to the project and a Reference.cs file is added under that node containing the new .NET Standard WCF client for accessing the Bean Trader service. If you look at the GetEndpointAddress or GetBindingForEndpoint methods in that file, you will see that bindings and endpoints are now generated programmatically (instead of via app config).

Our project has new WCF client classes now (in Reference.cs), but it also still has the old ones (in BeanTrader.cs). There are two options at this point:

  1. If you don’t want to make any changes to the original .NET Framework version of the app, you can use a <Compile Remove="BeanTrader.cs" /> item in the .NET Core project’s csproj file so that the .NET Framework and .NET Core versions of the app use different WCF clients. This has the advantage of leaving the existing .NET Framework project unchanged, but has the disadvantage that code using the generated WCF clients may need to be slightly different in the .NET Core case than it was in the .NET Framework project, so you will likely need to use #if directives to conditionally compile some WCF client usage (creating clients, for example) to work one way when built for .NET Core and another way when built for .NET Framework.
  2. If, on the other hand, some code churn in the existing .NET Framework project is acceptable, you can remove BeanTrader.cs all together and add Reference.cs to the .NET Framework project. Because the new WCF client is built for .NET Standard, it will work in both .NET Core and .NET Framework scenarios. This approach has the advantage that the code won’t need to bifurcate to support two different WCF clients – the same code will be used everywhere. The drawback, of course, is that it involves changing the (presumably stable) .NET Framework project.

In the case of the Bean Trader sample, we can make small changes to the original project if it makes migration easier, so follow these steps to reconcile WCF client usage:

  1. Add the new Reference.cs file to the .NET Framework BeanTraderClient.csproj project using the ‘Add existing item’ context menu from the solution explorer. Be sure to add ‘as link’ so that the same file is used by both projects (as opposed to copying the C# file).
    Add Reference.cs as link
  2. Delete BeanTrader.cs from the BeanTraderClient.csproj project (which will also remove it from the .NET Core project).
  3. The new WCF client is very similar to the old one, but a number of namespaces in the generated code are different. Because of this, it is necessary to update the project so that WCF client types are used from BeanTrader.Service (instead of from BeanTrader.Model or from no namespace at all). Building BeanTraderClient.Core.csproj will help to identify where these changes need to be made. Fixes will be needed both in C# and in XAML source files.
  4. Finally, you will discover that there is an error in BeanTraderServiceClientFactory.cs because the available constructors for the BeanTraderServiceClient type have changed in the new client. It used to be possible to supply an InstanceContext argument (which we created using a CallbackHandler from the Castle.Windsor IoC container). The new constructors create new CallbackHandlers instead, though. There are, however, constructors in BeanTraderServiceClient’s base type that match what we want. Since the auto-generated WCF client code all exists in partial classes, we can easily extend it. To do this, create a new file called BeanTraderServiceClient.cs (in the BeanTraderClient.csproj project so that it’s included in both the .NET Framework and the .NET Core projects) and add a partial class with that same name (using the BeanTrader.Service namespace). Then, add one constructor to the partial type as shown here:
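A sketch of that partial class follows. The EndpointConfiguration value and the GetBindingForEndpoint/GetEndpointAddress helpers are assumptions based on what the generated Reference.cs typically contains (they are the methods mentioned earlier); check the names against your own generated code:

```csharp
using System.ServiceModel;

namespace BeanTrader.Service
{
    public partial class BeanTraderServiceClient
    {
        // Lets callers keep supplying their own InstanceContext (for example, one wrapping
        // the CallbackHandler that comes from the Castle.Windsor container), mirroring the
        // constructor the old svcutil-generated client had.
        public BeanTraderServiceClient(InstanceContext callbackInstance)
            : base(callbackInstance,
                   GetBindingForEndpoint(EndpointConfiguration.NetTcp_BeanTraderService),
                   GetEndpointAddress(EndpointConfiguration.NetTcp_BeanTraderService))
        {
        }
    }
}
```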

With those changes made, we can create the WCF client instance with the same constructor as before and both projects will now be using a new .NET Standard-compatible WCF client. We can then change the Open call in TradingService.cs to use await OpenAsync, instead.

We also need to address the Close call in the same file. Since the Close method is called from a Dispose method, it would be nice to have a non-async version of the method to call (even though calling the async alternative would be harmless in this case). Fortunately, newer versions of System.ServiceModel.Primitives now include ClientBase<T>.Close. The latest stable version of the Microsoft.Windows.Compatibility package (as of the time of this blog post) includes version 4.4.1 of System.ServiceModel.Primitives, but by adding a direct package dependency on System.ServiceModel.Primitives version 4.5.3, it is possible to call ClientBase<T>.Close in the trading service’s Dispose method, as before.
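A sketch of how those calls end up in TradingService.cs (the class here is heavily trimmed; only the open and close calls are grounded in the steps above):

```csharp
using System;
using System.Threading.Tasks;
using BeanTrader.Service;

public sealed class TradingService : IDisposable
{
    private readonly BeanTraderServiceClient client;

    public TradingService(BeanTraderServiceClient client) => this.client = client;

    // The new .NET Standard client exposes OpenAsync, so the old synchronous
    // Open call becomes an awaited call.
    public Task ConnectAsync() => client.OpenAsync();

    // With a direct reference to System.ServiceModel.Primitives 4.5.3, the
    // synchronous ClientBase<T>.Close is available again for use from Dispose.
    public void Dispose() => client.Close();
}
```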

With the WCF issues addressed, the .NET Core version of the Bean Trader sample now builds cleanly!

Making sure the .NET Framework project still builds

Before wrapping up the ‘build-time fixes’ step, there’s one more issue to address. Although BeanTraderClient.Core.csproj builds without errors, the original .NET Framework-targeted BeanTraderClient.csproj now has errors! The primary error is this:

Error: Your project does not reference ".NETFramework,Version=v4.7.2" framework. Add a reference to ".NETFramework,Version=v4.7.2" in the "TargetFrameworks" property of your project file and then re-run NuGet restore.

This is because both of the project files build to the same output and intermediate output paths and the .NET Core project’s project.assets.json file (generated by dotnet restore) is conflicting with the .NET Framework build. If we only worked on one of these projects at a time, this could be avoided by just cleaning the obj/ and bin/ folders when switching projects, but in this scenario we have both projects open in Visual Studio together.

The solution is to update the projects’ output and intermediate output paths to something based on project name. A challenge here is that setting <BaseIntermediateOutputPath> in the csproj files directly won’t work because SDK-style projects use the intermediate output path as part of the Sdk="Microsoft.NET.Sdk.WindowsDesktop" declaration before we have an opportunity to change it in the csproj. This problem is discussed in more detail in Microsoft/msbuild#1603.

Instead, we can use a Directory.Build.props file to set the intermediate output path for both projects at once (and, importantly, prior to the intermediate output path being used by the project’s SDK). To do this, add a file called Directory.Build.props in the BeanTraderClient folder with the following contents:
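A minimal sketch of that Directory.Build.props (the exact folder layout is a matter of taste; the original repository may differ slightly):

```xml
<Project>
  <PropertyGroup>
    <!-- Give each project its own bin and obj folders so the two csproj files don't collide -->
    <BaseOutputPath>out\$(MSBuildProjectName)\bin\</BaseOutputPath>
    <BaseIntermediateOutputPath>out\$(MSBuildProjectName)\obj\</BaseIntermediateOutputPath>
  </PropertyGroup>
</Project>
```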

That will update BaseOutputPath and BaseIntermediateOutputPath for both projects to be under directories based on their project name (out/$(MSBuildProjectName)).

Finally, because C# files are generated in intermediate output paths and we don’t want the .NET Core project to compile the .NET Framework project’s temporary files, we need to add <Compile Remove="out\BeanTraderClient\**\*.cs" /> to BeanTraderClient.Core.csproj.

The solution (including both .NET Framework and .NET Core versions of the Bean Trader app) now builds successfully!

Step 4: Runtime testing

It’s easy to forget that migration work isn’t done as soon as the project builds cleanly against .NET Core. It’s important to leave time for testing the ported app, too.

Let’s try launching the newly-ported Bean Trader app and see what happens. The app doesn’t get very far before failing with the following exception:

System.Configuration.ConfigurationErrorsException: 'Configuration system failed to initialize'

Inner Exception
ConfigurationErrorsException: Unrecognized configuration section system.serviceModel.

This makes sense, of course. Remember that WCF no longer uses app configuration, so the old system.serviceModel section of the app.config file needs to be removed. The updated WCF client includes all of the same information in its code, so the config section isn’t needed anymore.

After removing the system.serviceModel section of app.config, the app launches but fails with another exception when a user signs in:

System.PlatformNotSupportedException: 'Operation is not supported on this platform.'

The unsupported API is Func<T>.BeginInvoke. As explained in dotnet/corefx#5940, .NET Core doesn’t support the BeginInvoke and EndInvoke methods on delegate types due to underlying remoting dependencies. I explained this issue (and its fix) in more detail in a previous blog post, but the gist is that BeginInvoke and EndInvoke calls should be replaced with Task.Run (or async alternatives, if possible). Applying the general solution here, the BeginInvoke call can be replaced with an Invoke call launched by Task.Run:
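A small, self-contained sketch of the pattern (the delegate and continuation here are placeholders rather than the Bean Trader app’s actual sign-in code):

```csharp
using System;
using System.Threading.Tasks;

class BeginInvokeReplacementSample
{
    static void Main()
    {
        Func<string> getCurrentUser = () => "sample user"; // placeholder delegate

        // Before (throws PlatformNotSupportedException on .NET Core):
        // getCurrentUser.BeginInvoke(ar => Console.WriteLine(getCurrentUser.EndInvoke(ar)), null);

        // After: invoke the delegate on a thread-pool thread via Task.Run instead.
        Task.Run(() => getCurrentUser())
            .ContinueWith(t => Console.WriteLine(t.Result))
            .Wait();
    }
}
```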

After removing the BeginInvoke usage, the Bean Trader app runs successfully on .NET Core!

BeanTrader running on .NET Core

All apps are different, of course, so the specific steps needed to migrate your own apps to .NET Core will vary. But I hope this example demonstrates the general workflow and the types of issues that can be expected. And, despite these posts’ length, the actual changes needed in the Bean Trader sample to make it work on .NET Core were fairly limited. Many apps migrate to .NET Core in this same way – with limited or even no code changes needed.

The post Migrating a Sample WPF App to .NET Core 3 (Part 2) appeared first on .NET Blog.

Customize object displays in the Visual Studio debugger YOUR way


Have you ever stared at objects in a debugger window and wished that you could view those objects by something other than their type?  I certainly have!  Expanding items to determine each one’s identity can become tiresome very fast. Ideally, it would be great to quickly locate them by a particular property value.  Luckily for us, Visual Studio has two not-so-well-known attributes known as DebuggerDisplay for managed users, and Natvis for native C++ users. These attributes let you customize how you view objects in debugger windows such as the Watch, Autos, Locals, and datatips!

Locals and DataTips windows with and without DebuggerDisplay attribute appended to code
Figure 1 – Locals and DataTips windows with and without DebuggerDisplay attribute appended to code

What is the DebuggerDisplay attribute?

By writing DebuggerDisplay syntax at the top of a class, you can choose what strings and properties you want at the top of each object node in debugger windows.  Besides displaying strings in debugger windows, adding curly brackets ({}) to the DebuggerDisplay attribute allows Visual Studio to display the value of a property or method that you specify. You can also add format specifiers to DebuggerDisplay in order to further change how values are displayed and formatted in the debugger windows. In Figure 2, DebuggerDisplay appends the format specifier “nq” (no quotes).  The resulting display shows the string property Title without the surrounding quotation marks.

Basic DebuggerDisplay syntax added to top of Book class
Figure 2 – Basic DebuggerDisplay syntax added to top of Book class
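In code, the kind of attribute Figure 2 illustrates looks something like this (the Book class and its members are assumed for illustration):

using System.Diagnostics;

// Show the book's title, without surrounding quotes thanks to the "nq" specifier,
// at the top of each Book node in the debugger windows.
[DebuggerDisplay("{Title,nq}")]
public class Book
{
    public string Title { get; set; }
    public int DaysCheckedOut { get; set; }
}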

 

Locals window with above DebuggerDisplay syntax added to code
Figure 3 – Locals window with above DebuggerDisplay syntax added to code

 

One workaround for this in the past has been to override a class’s ToString() method.  In contrast, DebuggerDisplay controls how an item is displayed without overriding that method.  So, if you don’t want debugging-related content in your ToString() method (especially when that method is called in your actual program), DebuggerDisplay is the way to go!

 

Can I display expressions for each object in debugger windows?

There may be times when you want to display expressions in debugger windows.  Good news: you can display expressions using the DebuggerDisplay attribute!

 

Example of DebuggerDisplay attribute containing an expression
Figure 4 – Example of DebuggerDisplay attribute containing an expression
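For example, an expression embedded directly in the attribute might look like this (the property names are assumed for illustration):

using System.Diagnostics;

// Evaluate an expression right in the attribute: flag overdue books in the debugger windows.
[DebuggerDisplay("{Title,nq} overdue: {DaysCheckedOut > 14}")]
public class Book
{
    public string Title { get; set; }
    public int DaysCheckedOut { get; set; }
}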

 

Locals window with above DebuggerDisplay syntax and added expression evaluation
Figure 5 – Locals window with above DebuggerDisplay syntax and added expression evaluation

 

Bad news: DebuggerDisplay expressions can cause additional issues when debugging your code. Potential issues include performance hits for large or complex expressions, compilation and runtime errors when the expression’s language differs from the language being debugged, and application state changes when expressions mutate properties.

 

Figure 6 - DebuggerDisplay attribute with Visual Basic-style ternary expression syntax added
Figure 6 – DebuggerDisplay attribute with Visual Basic-style ternary expression syntax added
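For instance, an attribute along the lines of Figure 6, using Visual Basic’s If(...) operator inside a C# class (members assumed for illustration), leads to the error shown in Figure 7 because the C# expression evaluator can’t parse it:

using System.Diagnostics;

// Visual Basic's If(condition, a, b) operator is not valid C#, so the debugger reports
// an error instead of a value when it evaluates this attribute in a C# project.
[DebuggerDisplay("{If(CheckedOut, \"Checked out\", \"Available\"),nq}")]
public class Book
{
    public string Title { get; set; }
    public bool CheckedOut { get; set; }
}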

 

Figure 7 - Runtime error received after using above Visual Basic-style syntax while debugging in C#
Figure 7 – Runtime error received after using above Visual Basic-style syntax while debugging in C#

 

But fear not! One way to reduce these potential issues with expressions is to create a private property or method that returns the expression’s result as a string, and to tell DebuggerDisplay to display that property or method instead.

 

Figure 8 - Creating a private property containing more complex expressions and formatting referenced by DebuggerDisplay
Figure 8 – Creating a private property containing more complex expressions and formatting referenced by DebuggerDisplay

 

Figure 9 - Creating a method containing more complex expressions and formatting referenced by DebuggerDisplay
Figure 9 – Creating a method containing more complex expressions and formatting referenced by DebuggerDisplay
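Put together, the approach in Figures 8 and 9 looks roughly like this (class and member names assumed for illustration):

using System.Diagnostics;

// Reference a private helper from DebuggerDisplay instead of embedding a
// complex expression in the attribute string itself.
[DebuggerDisplay("{DebuggerDisplayString,nq}")]
public class Book
{
    public string Title { get; set; }
    public int DaysCheckedOut { get; set; }

    // The expression lives in C#, where the compiler checks it, rather than in a
    // string that the debugger has to evaluate.
    private string DebuggerDisplayString =>
        $"{Title} ({(DaysCheckedOut > 14 ? "overdue" : "on time")})";
}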

 

What is the feature equivalent  to DebuggerDisplay for C++ users?

DebuggerDisplay is compatible with C#, F#, and Visual Basic, but if you’re debugging in C++, Natvis is a great alternative!  Though not as simple as adding syntax to the top of a class like DebuggerDisplay, adding a .natvis file to a project lets you customize how objects are displayed.

 

Figure 10 - Example of Natvis being used in Locals window
Figure 10 – Example of Natvis being used in Locals window

 

Right-click the C++ project node in Solution Explorer, select Add > New Item, and select Visual C++ > Utility > Debugger visualization file (.natvis).  The result is an XML file where you can control which properties are displayed while debugging.

 

Figure 11 - Example Natvis file corresponding to display shown above
Figure 11 – Example Natvis file corresponding to display shown above
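A Natvis entry along the lines of Figure 11 might look roughly like this (the type and member names are assumed for illustration):

<?xml version="1.0" encoding="utf-8"?>
<AutoVisualizer xmlns="http://schemas.microsoft.com/vstudio/debugger/natvis/2010">
  <!-- Show a book's title (without quotes) and checked-out flag at the top of each node. -->
  <Type Name="Book">
    <DisplayString>{m_title,sb} checked out: {m_checkedOut}</DisplayString>
    <Expand>
      <Item Name="Title">m_title</Item>
      <Item Name="Checked out">m_checkedOut</Item>
    </Expand>
  </Type>
</AutoVisualizer>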

 

To learn more about using Natvis while debugging C++ projects, check out the documentation.

 

These features are awesome and will save me lots of time!  How can I help share DebuggerDisplay and Natvis with others?

Fun fact: both DebuggerDisplay and Natvis have been in Visual Studio for years!  They are extremely useful to most developers but are still not as discoverable and well-known as they could be.  As a result, we are currently working to make these attributes easier to discover, and your feedback will help make this happen!  Please complete this survey, which will give us insight into providing an improved experience when using these attributes.

The post Customize object displays in the Visual Studio debugger YOUR way appeared first on The Visual Studio Blog.

Azure Shared Image Gallery now generally available


At Microsoft Build 2019, we announced the general availability of Azure Shared Image Gallery, making it easier to manage, share, and globally distribute custom virtual machine (VM) images in Azure.

Shared Image Gallery provides a simple way to share your applications with others in your organization, within or across Azure Active Directory (AD) tenants and regions. This enables you to expedite regional expansion or DevOps processes and simplify your cross-region HA/DR setup.

Shared Image Gallery also supports larger deployments. You can now deploy up to 1,000 virtual machine instances in a scale set, up from 600 with managed images.

Here is what one of our customers had to say about the feature:

“Shared Image Gallery enables us to build all our VM images from a single Azure DevOps pipeline and to deploy IaaS VMs from these images in any subscription in any tenant in any region, without the added complexity of managing and distributing copies of managed images or VHDs across multiple subscriptions or regions.”

– Stanley Merkx, an Infrastructure Engineer at VIVAT, a Netherlands based insurance company

Regional availability

Shared Image Gallery now supports all Azure public cloud regions as target regions, and all generally available Azure public cloud regions, with the exception of the South Africa regions, as source regions. Check the list of source and target regions.
In the coming months, this feature will also be available in sovereign clouds.

Quota

The default quotas supported on Shared Image Gallery resources are:

  • 100 shared image galleries per subscription per region
  • 1,000 image definitions per subscription per region
  • 10,000 image versions per subscription per region

Users can request a higher quota based on their requirements. Learn how you can track usage in your subscription.

Pricing

There is no extra charge for using the Shared Image Gallery service. You will only pay for the following:

  1. Storage charges for image versions and replicas in each region, source and target
  2. Network egress charges for replication across regions

Getting started

Let’s take a quick look at what you can do with Shared Image Gallery.

Manage your images better

We introduced three new Azure Resource Manager resources as part of the feature—gallery, image definition, and image version—which help you organize images into logical groups. You can also publish multiple versions of your images as and when you update or patch the applications.

Hierarchy of resources in Azure Shared Image Gallery.

Share images across subscriptions and Azure Active Directory tenants

One of the key capabilities that Shared Image Gallery provides is a way to share your images across subscriptions. Since all three newly introduced constructs are Azure Resource Manager resources, you can use Azure role-based access control (RBAC) to share your galleries or image definitions with other users who can then deploy VMs in their subscriptions, even across Azure Active Directory tenants.

A few common scenarios where sharing images across tenants becomes useful are:

  1. A company acquires another and suddenly the Azure infrastructure is spread across Azure AD tenants.
  2. A company with multiple subsidiaries that use Azure is likely to have multiple Azure AD tenants.

Virtual machine image sharing across Azure Active Directory tenants

Learn more about how to share your images across tenants.

Distribute your images globally

We understand that business happens at a global scale and you don’t want your organization to be limited by the platform. Shared Image Gallery provides a way to globally distribute your images based on your organizational needs. You only need to specify the target regions and Shared Image Gallery will replicate your image versions to the regions specified.

Global replication of virtual machine images

Scale your deployments

With Shared Image Gallery, you can now deploy up to 1,000 VM instances in a VM scale set, an increase from 600 with managed images. We also introduced the concept of image replicas for better deployment performance, reliability, and consistency. You can set a different replica count in each target region based on your regional scale needs. Since each replica is a deep copy of your image, you can scale your deployments linearly with each extra replica compared to using a single managed image.

Distribution of virtual machine create calls across replicas

Learn more about how to use replicas.

Make your images highly available

With the general availability of Shared Image Gallery, you can choose to store your images in zone-redundant storage (ZRS) accounts in regions with Availability Zones. You can also choose to specify storage account type for each of the target regions. Check the regional availability of zone-redundant storage.

Virtual machine memory allocation and placement on Azure Stack


Customers have been using Azure Stack in a number of different ways. We continue to see Azure Stack used in connected and disconnected scenarios, as a platform for building applications to deploy both on-premises as well as in Azure. Many customers want to just migrate existing applications over to Azure Stack as a starting point for their hybrid or edge journey.

Whatever you decide to do once you’ve started on Azure Stack, it’s important to note that in any scenario, some functions are done differently here. One such function is capacity planning. As an operator of Azure Stack, you have a responsibility to accurately plan for when additional capacity needs to be added to the system. To plan for this, it is important to understand how memory as a function of capacity is consumed in the system. The purpose of this post is to detail how Virtual Machine (VM) placement works in Azure Stack, with a focus on the different components that come into play when determining the available memory for capacity planning.

Azure Stack is built as a hyper-converged cluster of compute and storage. The convergence allows for the sharing of the hardware, referred to as a scale unit. In Azure Stack, a scale unit provides the availability and scalability of resources. A scale unit consists of a set of Azure Stack servers, referred to as hosts or nodes. The infrastructure software is hosted within a set of VMs and shares the same physical servers as the tenant VMs. All Azure Stack VMs are then managed by the scale unit’s Windows Server clustering technologies and individual Hyper-V instances. The scale unit simplifies the acquisition and management of Azure Stack. The scale unit also allows for the movement and scalability of all services across Azure Stack, tenant and infrastructure.

You can review a pie chart in the administration portal that shows the free and used memory in Azure Stack, as shown below:

Capacity Chart on the Azure Stack Administrator Portal

Figure 1: Capacity Chart on the Azure Stack Administrator Portal

The following components consume the memory in the used section of the pie chart:

  1. Host OS usage or reserve – This is the memory used by the operating system (OS) on the host, virtual memory page tables, processes that are running on the host OS, and the Storage Spaces Direct memory cache. Since this value is dependent on the memory used by the different Hyper-V processes running on the host, it can fluctuate.
  2. Infrastructure services – These are the infrastructure VMs that make up Azure Stack. As of the 1904 release version of Azure Stack, this entails approximately 31 VMs that take up 242 GB + (4 GB x number of nodes) of memory. The memory utilization of the infrastructure services component may change as we work on making our infrastructure services more scalable and resilient.
  3. Resiliency reserve – Azure Stack reserves a portion of the memory to allow for tenant availability during a single host failure as well as during patch and update to allow for successful live migration of VMs.
  4. Tenant VMs – These are the tenant VMs created by Azure Stack users. In addition to running VMs, memory is consumed by any VMs that have landed on the fabric. This means that VMs in a “Creating” or “Failed” state, or VMs shut down from within the guest, will consume memory. However, VMs that have been deallocated using the stop (deallocate) option from the portal, PowerShell, or the CLI will not consume memory from Azure Stack.
  5. Add-on RPs – VMs deployed for the Add-on RPs like SQL, MySQL, App Service etc.

Capacity usage on a 4-node Azure Stack

Figure 2: Capacity usage on a 4-node Azure Stack

In Azure Stack, tenant VM placement is done automatically by the placement engine across available hosts. The only two considerations when placing VMs are whether there is enough memory on the host for that VM type, and whether the VMs are part of an availability set or a VM scale set. Azure Stack doesn't over-commit memory. However, an over-commit of the number of physical cores is allowed. Since placement algorithms don't look at the existing virtual-to-physical core over-provisioning ratio as a factor, each host could have a different ratio.

Memory consideration: Availability sets/VM scale sets

To achieve high availability of a multi-VM production system in Azure Stack, VMs should be placed in an availability set that spreads them across multiple fault domains, that is, Azure Stack hosts. If there is a host failure, VMs from the failed fault domain will be restarted in other hosts, but if possible, kept in separate fault domain from the other VMs in the same availability set. When the host comes back online, VMs will be rebalanced to maintain high availability. VM scale sets use availability sets on the back end and make sure each scale set VM instance is placed in a different fault domain. Since Azure Stack hosts can be filled up at varying levels prior to trying placement, VMs in an availability set or VMSS may fail at creation due to the lack of capacity to place the VM/ VMSS instances on separate Azure Stack hosts.

Memory consideration: Azure Stack resiliency resources

Azure Stack is designed to keep VMs running that have been successfully provisioned. If a host goes offline because of a hardware failure or needs to be rebooted, or if there is a patch and update of Azure Stack hosts, an attempt is made to live migrate the VMs executing on that host to another available host in the solution. 

This live migration can only be achieved if there is reserved memory capacity to allow for the restart or migration to occur. Therefore, a portion of the total host memory is reserved and unavailable for tenant VM placement.

Learn more about the calculation for the resiliency reserve. Below is a brief summary of this calculation:

Available Memory for VM placement = Total Host Memory – Resiliency Reserve – Memory used by running tenant VMs - Azure Stack Infrastructure Overhead

Resiliency reserve = H + R * ((N-1) * H) + V * (N-2)

Where:

  • H = Size of single host memory
  • N = Size of Scale Unit (number of hosts)
  • R = Operating system reserve/Memory used by the Host OS, which is .15 in this formula
  • V = Largest VM in the scale unit

Azure Stack Infrastructure Overhead = 242 GB + (4 GB x # of nodes). This accounts for the approximately 31 VMs that are used to host Azure Stack's infrastructure.

Memory used by the Host OS = 15 percent (0.15) of host memory. The operating system reserve value is an estimate and will vary based on the physical memory capacity of the host and general operating system overhead.

The value V, the largest VM in the scale unit, is determined dynamically by the largest tenant VM deployed. For example, the largest VM value could be 7 GB or 112 GB or any other supported VM memory size in the Azure Stack solution. We pick the size of the largest VM here so that enough memory is reserved for a live migration of this large VM not to fail. Changing the largest VM on the Azure Stack fabric will result in an increase in the resiliency reserve in addition to the increase in the memory of the VM itself.

Figure 3 is a graph of a 12-host Azure Stack with 384 GB of memory per host, showing how the amount of available memory varies depending on the size of the largest VM on the Azure Stack. The largest VM in these examples is the only VM that has been placed on the Azure Stack.

image

Figure 3: Available memory with changing Maximum VM size
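As a rough worked example for one point on that graph (assuming 12 hosts with 384 GB each and a single 112 GB tenant VM; the numbers are illustrative only):

Resiliency reserve = 384 + 0.15 * (11 * 384) + 112 * (12 - 2) = 384 + 633.6 + 1,120 = 2,137.6 GB
Azure Stack Infrastructure Overhead = 242 GB + (4 GB x 12) = 290 GB
Available memory for VM placement = (12 x 384) - 2,137.6 - 112 - 290 ≈ 2,068 GB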

The resiliency reserve is also a function of the size of the host. Figure 4 below shows the available memory for Azure Stacks with different host memory sizes, given the possible largest VM memory sizes.

image

Figure 4: Available memory with different largest VMs over varied host memory

The above calculation is an estimate and is subject to change based on the current version of Azure Stack. The ability to deploy tenant VMs and services depends on the specifics of the deployed solution. This example calculation is a guide only, not an absolute answer on how many VMs can be deployed.

Please keep the above considerations in mind while capacity planning for Azure Stack. We have also published an Azure Stack Capacity Planner that can help ease your capacity planning needs. Find more information by looking through some frequently asked questions.

Taking advantage of the new Azure Application Gateway V2


We recently released Azure Application Gateway V2 and Web Application Firewall (WAF) V2. These SKUs are named Standard_v2 and WAF_v2 respectively and are fully supported with a 99.95% SLA. The new SKUs offer significant improvements and additional capabilities to customers:

  • Autoscaling allows elasticity for your application by scaling the application gateway as needed based on your application’s traffic pattern. You no longer need to run the application gateway at peak provisioned capacity, thus saving significantly on cost.
  • Zone redundancy enables your application gateway to survive zonal failures, offering better resilience to your application
  • Static VIP feature ensures that your endpoint address will not change over its lifecycle
  • Header Rewrite allows you to add, remove or update HTTP request and response headers on your application gateway, thus enabling various scenarios such as HSTS support, securing cookies, changing cache controls etc. without the need to touch your application code.
  • Faster provisioning and configuration update time
  • Improved performance for your application gateway helps reduce overall cost

Diagram showing improved capabilities in V2

We highly recommend that customers use the V2 SKUs instead of the V1 SKU for new applications/workloads.

Customers who have existing applications behind the V1 SKUs of Application Gateway/WAF should also consider migrating to the V2 SKUs sooner rather than later. These are some of the reasons:

  • Features and improvements: You can take advantage of the improvements and capabilities mentioned above and continue to take advantage of new features in our roadmap as they are released. Going forward, most of the new features in our roadmap will only be released on the V2 SKU.
  • Cost: The V2 SKU may work out to be cheaper overall for you relative to the V1 SKU. See our pricing page for more information on V2 SKU costs.
  • Platform support in future: We will be disabling creation of new gateways on the V1 SKU at some point in the future; advance notification will be provided so customers have sufficient time to migrate. Migrating your gateways to the V2 SKU sooner rather than later will allow us to allocate more of our engineering and support resources to the V2 SKU.  Help us help you!

Guided migration – Configuration replication to V2 SKU gateway

While customers can certainly do the migration on their own by manually configuring new V2 gateways with the same configuration as their V1 gateways, in reality, for many customers this could be quite complicated and error prone due to the number of configuration touchpoints that may be involved. To help with this, we have recently published a PowerShell script along with documentation that helps replicate the configuration on a V1 gateway to a new V2 gateway.

The PowerShell script requires a few inputs and will seamlessly copy over the configuration from a specified V1 gateway to a new V2 gateway (the V2 gateway will be created for you automatically). There are a few limitations, so please review those before using the script, and visit our mini FAQ for additional guidance.

Switching over traffic to new V2 endpoints

This will be completely up to the customer, as the specifics of how traffic flow through the application gateway is architected vary from application to application and customer to customer. However, we have provided guidance for some scenarios of traffic flow. We will consider future tooling to help customers with this phase, especially for customers using Azure DNS or Azure Traffic Manager to direct traffic to application gateways.

Feedback

As always, we are interested in hearing your valuable feedback. For specific feedback on the migration to the V2 SKU, you are welcome to email us at appgwmigrationsup@microsoft.com. For general feedback on Application Gateways, please use our Azure Feedback page.

Microsoft FHIR Server for Azure extends to SQL


This blog post was co-authored by Doug Seven, Senior Director, Microsoft Health Engineering and Michael Hansen, Senior Program Manager, Microsoft Health Engineering.

Since the launch of the open source FHIR Server for Azure on GitHub last November, we have been humbled by the tremendously positive response and surge in the use of FHIR in the healthcare community. There has been great interest in Microsoft expanding capabilities in the FHIR service, and today we are pleased to announce that the open source FHIR Server for Azure now supports both Azure Cosmos DB and SQL backed persistence providers. With the SQL persistence provider, developers will be able to perform complex search queries that join information across multiple FHIR resource types and leverage transactions.

Why we are adding SQL to the FHIR Server for Azure

The FHIR service relies on a data persistence provider for storing and searching FHIR resources. The initial release of the FHIR service included a data persistence provider based on Azure Cosmos DB, which is a globally distributed, multi-model database for any scale. Azure Cosmos DB is ideal for healthcare scenarios where users need arbitrary scale and low latency. As our developer community continues to expand healthcare applications with FHIR, new use cases and features of the FHIR specification have emerged that are a natural fit for a relational database such as SQL Server. For example, SQL can enable search queries that would correspond to a database join (chained searches, _include, _revinclude) and enable atomic transactions where the entire set of changes succeed or fail as a single entity.

Many healthcare organizations are already aware of the benefits of SQL for managing relational data. Combining the power of SQL with data in the native FHIR format provides new options and applications to accelerate the use of FHIR. To get started, the latter half of this blog post gives you an easy to use demonstration on how to deploy the SQL based FHIR service and interact with the API.

Introducing FHIR R4

In December 2018, HL7 announced FHIR v4.0.0 (aka FHIR R4). This release is significant because a number of the most mature resource definitions became normative, meaning they will be backwards compatible in future releases. We are pleased to announce support for FHIR R4 in the open source FHIR Server for Azure. Available immediately in open source, this enables developers to configure their FHIR server to use the latest iteration of the FHIR specification with trust that the thirteen most commonly used resources, including Patient, Observation, and StructureDefinition, are “locked” and expected to remain consistent in future iterations of the specification.

Getting started in FHIR

Starting with FHIR is easy. Both the open source FHIR Server for Azure and the Azure API for FHIR managed service allow you to provision a FHIR service in just a few minutes. The FHIR Server for Azure repository includes an Azure Resource Manager template for deploying the FHIR server with a SQL persistence provider.

Deployment

To deploy this template in Azure use this deployment link, which will open a custom template deployment form.

Azure Resource Manager template for deploying the FHIR server with a SQL persistence provider

At a minimum, you must provide a resource group, service name, and SQL service administrator password. If no security details (authority and audience) are provided, the instance will be provisioned with no security. For details on provisioning with security enabled, see the FHIR Server deployment instructions.

Testing

After deployment, use a tool like Postman to test the FHIR service. First check the capability statement from the /metadata endpoint.

Check the capability statement from the /metadata endpoint in Postman

Next, insert a patient with a POST to /Patient.

Insert a patient with a POST to /Patient

Finally, test that you can search patients.

Search for patients in Postman
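If you would rather script the same smoke test instead of using Postman, a minimal C# sketch might look like the following (the server URL is a placeholder, and it assumes an instance deployed without security):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class FhirSmokeTest
{
    static async Task Main()
    {
        // Placeholder: replace with the URL of your deployed FHIR server.
        var baseUrl = "https://my-fhir-server.azurewebsites.net";
        using var client = new HttpClient();

        // 1. Check the capability statement.
        Console.WriteLine(await client.GetStringAsync($"{baseUrl}/metadata"));

        // 2. Insert a patient with a POST to /Patient.
        var patientJson = "{\"resourceType\":\"Patient\",\"name\":[{\"family\":\"Smith\",\"given\":[\"Jane\"]}]}";
        var response = await client.PostAsync(
            $"{baseUrl}/Patient",
            new StringContent(patientJson, Encoding.UTF8, "application/fhir+json"));
        Console.WriteLine(response.StatusCode);

        // 3. Search for patients by family name.
        Console.WriteLine(await client.GetStringAsync($"{baseUrl}/Patient?family=Smith"));
    }
}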

FHIR in your organization

Whether you choose to leverage the Azure Cosmos DB or the SQL persistence provider, there are many choices in terms of database schema and indexing. The open source FHIR Server for Azure showcases a design choice that will cover a broad set of use cases and enables you to optimize as needed for your specific use case. We encourage you to explore the source code, deploy the SQL configuration of the server, and provide us with feedback!

We’re thrilled to see the industry embracing FHIR! This week our team is at the HL7 FHIR DevDays, hosted on the Microsoft campus in Redmond, Washington. If you’re attending DevDays and are interested in learning more about the design considerations, please attend the session entitled “Building a SQL persistence provider for a FHIR server” or stop by the Microsoft coding table!

FHIR® is the registered trademark of HL7 and is used with the permission of HL7.

Create a private gallery for self-hosted Visual Studio extensions


Most Visual Studio extension authors publish their extensions to the public Marketplace to allow everyone to install them and benefit from the large and open ecosystem. However, some companies create extensions for internal use only. A private gallery allows them to distribute the extensions easily with the same auto-update capabilities enjoyed by any public Marketplace extension. And now, we’ve streamlined the process even more so that you can easily create a private gallery for your team or organization.

Visual Studio 2010 introduced support for private galleries, but few used them due to a lack of samples and tooling. A lot has changed since then, and private gallery support has seen several updates to support extension packs and other more recent features.

The anatomy of a private gallery

A private gallery is an ATOM feed (an XML file) that contains metadata about the extensions. Registering the gallery with Visual Studio can be done either by the user manually under Tools -> Options or by an extension using a custom .pkgdef file (example).

The ATOM feed can be located on a web server, file system or file share. After registering the gallery, a new category appears under the Online tab in the Extension Manager dialog as shown in the first screenshot above.

Create the ATOM feed

The open source tool Private Gallery Creator makes it straightforward to create the ATOM feed. Download the executable and run it in a folder containing the VSIX files you wish to include in the feed. The tool analyzes the VSIX files and extracts the metadata needed to produce a file called feed.xml in the same folder.

You could also set up a CI/CD pipeline that automatically executes the tool to update the feed. In addition, the tool has a “watch” feature to automatically produce a new feed any time a VSIX file is added or modified in the same folder.

Set it up in only four steps

Here’s a recap of how to set up a private gallery:

  1. Put your .vsix files into an empty folder accessible to all consumers of the gallery
  2. Download PrivateGalleryCreator.exe executable into the same folder
  3. Double-click PrivateGalleryCreator.exe to produce the feed.xml file
  4. Register the feed in Visual Studio manually or from an extension’s .pkgdef file

In summary

You can learn more about private galleries from the documentation and by checking the Private Gallery Creator project on GitHub. There are a few public offerings for hosted private galleries such as MyGet and Open VSIX Gallery that may be worth looking into as well.

We’d love to hear about how you use private galleries today or why you don’t use them, so please sound off in the comments below.

The post Create a private gallery for self-hosted Visual Studio extensions appeared first on The Visual Studio Blog.
