
What’s new in Azure DevOps Sprint 146 Update


In this update, you can now simplify the organization of your work using the Basic process, easily create tables and add a query right into the Wiki plus more, and see updates to Azure Pipelines. Check out the video to learn more about these features.


You can find more information about ServiceNow Change Management in Azure Pipelines here.

Check out the release notes for more details plus more features released this sprint, like GitHub Enterprise support and automatic GitHub service connections in build pipelines.



New capabilities in Microsoft 365 empower healthcare professionals

Lighting up healthcare data with FHIR®: Announcing the Azure API for FHIR


In the last several years we’ve seen fundamental transformation in healthcare data management, but the biggest, and perhaps most important shift, has been in how healthcare organizations think about cloud technology and their most sensitive health data. Healthcare leaders have transitioned from asking “Why should I manage healthcare data in the cloud?” and are now asking “How?”.

The change in the question may seem subtle, but the rigor required to ensure the highest level of privacy, security, and management of Protected Health Information (PHI) in the cloud has been a barrier to entry for much of the healthcare ecosystem. Compounding the difficulty is the state of data: multiple datasets, fragmented sources of truth, inconsistent formats, and exponential growth of data types.

We are now seeing, almost daily, new breakthroughs with applied machine learning on health data. But to truly apply machine learning at scale in the healthcare industry, we must ensure a secure and trusted pathway to manage that data in the cloud. Moving data into the cloud in its current state can reduce cost, but cost isn’t the only measure. Healthcare leaders are thinking about how they bring their data into the cloud while increasing opportunities to use and learn from that data: How do we ensure the privacy of patient data? How do we retain control and access management for our data at scale? How do we bring data into the cloud in a way that will accelerate machine learning for the future?  

And today I am thrilled to announce Azure technology that begins to answer the question of “how”: Azure API for FHIR®.

Azure API for FHIR®: Your health data. Unlocked with FHIR.

Data management in the open source FHIR (Fast Healthcare Interoperability Resources) standard is becoming turnkey for interoperability and machine learning on healthcare data. There is a growing need for healthcare partners to build and maintain FHIR services that exchange and manage data in the FHIR format.

Azure API for FHIR offers exchange of data via a FHIR API and a managed Platform as a Service (PaaS) offering in Azure, designed for management and persistence of PHI data in the native FHIR format. The FHIR API and data store enable you to securely connect and interact with any system that utilizes FHIR APIs, and Microsoft takes on the operations, maintenance, updates, and compliance requirements in the PaaS offering, so you can free up your own operational and development resources.
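
To make "exchange of data via a FHIR API" concrete, here is a minimal sketch in Python of reading and searching Patient resources over the standard FHIR REST interface. The service URL, resource id, and token are hypothetical placeholders; an Azure API for FHIR instance exposes its own endpoint and uses Azure Active Directory tokens for authentication.

    import requests

    # Hypothetical endpoint and Azure Active Directory token -- replace with your own.
    fhir_base_url = "https://myfhirservice.azurehealthcareapis.com"
    headers = {
        "Authorization": "Bearer <access-token-from-azure-ad>",
        "Accept": "application/fhir+json",
    }

    # Standard FHIR read: GET [base]/Patient/[id] returns a single Patient resource.
    patient = requests.get(f"{fhir_base_url}/Patient/example-id", headers=headers).json()
    print(patient["resourceType"], patient.get("id"))

    # Standard FHIR search: GET [base]/Patient?family=Smith returns a Bundle of matches.
    bundle = requests.get(f"{fhir_base_url}/Patient", params={"family": "Smith"}, headers=headers).json()
    print(bundle.get("total", 0), "matching patients")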

Key features of the Azure API for FHIR will include:

  • Provision and start running in just a few minutes
  • High performance, low latency
  • Enterprise grade, managed FHIR service
  • Role Based Access Control (RBAC) – allowing you to manage access to your data at scale
  • Audit log tracking for access, creation, modification, and reads within each data store
  • SMART on FHIR functionality
  • Secure compliance in the cloud: ISO 27001:2013 certified, supports HIPAA and GDPR, and built on the HITRUST certified Azure platform
  • Data is isolated to a unique database per API instance
  • Protection of your data with multi-region failover

The cost-effective way to start in the cloud

Because we believe it's important to invest in the FHIR standard, you pay only for underlying database usage and data transfer when using the Azure API for FHIR.

The cloud environment you choose for healthcare applications is critical. You want elastic scale so you pay only for the throughput and storage you need. The Azure services that power Azure API for FHIR are designed for rapid performance no matter what size datasets you’re managing. The data persistence layer in the Azure API for FHIR leverages Azure Cosmos DB, which guarantees low latency at the 99th percentile and high availability with multi-homing capabilities.

Those with experience in healthcare data management may wonder: we have HL7 standards in the industry already, why do we need FHIR to bring data into the cloud? HL7 has served the industry well since its first implementations in the 1980s. But as it has evolved, customizations of HL7 can translate to a heavy lift for the future of healthcare learning: data science. FHIR is gaining traction because it provides a consistent, open source, extensible data standard that can scale as we learn. In order to accelerate machine learning on healthcare data, organizations are shifting data to the FHIR format as they transition into the cloud, saving both time and money.
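
For readers new to the standard, a minimal illustrative example (not from this article) of a FHIR Patient resource shows what "normalized in the FHIR format" looks like: every FHIR-aware system can consume the same JSON shape, and extensions allow local additions without breaking the common schema.

    import json

    # A minimal FHIR Patient resource; real records carry many more fields.
    patient = {
        "resourceType": "Patient",
        "id": "example",
        "name": [{"family": "Chalmers", "given": ["Peter", "James"]}],
        "gender": "male",
        "birthDate": "1974-12-25",
    }

    print(json.dumps(patient, indent=2))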

Where can I apply the Azure API for FHIR?

Azure API for FHIR is intended for customers developing solutions that integrate healthcare data from one or more systems of record. The API supports ingesting, managing, and persisting that data as native FHIR resources. Leveraging an open source standard (FHIR) enables interoperability for data sharing both within and outside of your ecosystem and helps accelerate machine learning on data that is normalized in FHIR.

Our customers are already seeing powerful scenarios for FHIR applications:

Startup/IoMT:
Fred Hutchinson Cancer Research Center in Seattle, WA is developing innovative IoMT and patient applications to remotely monitor patients undergoing chemotherapy. While in development, they needed a secure, fully managed backend service to handle patient data across multiple participating hospitals. To ensure they could design once and integrate quickly into a broad number of hospital EHR systems, they are using Azure API for FHIR and a SMART on FHIR implementation.

Provider Ecosystems:
University of Pittsburgh Medical Center has been working with Microsoft FHIR offerings in their hospital systems: “The ability to one-click deploy a FHIR server as a managed service allows us to think more about our applications and customer needs, and less about the plumbing required to store and represent clinical data.” – Brian Kolowitz, director of product management, UPMC Enterprises.

Research:
Associate Dean of Research Information Technology at the University of Michigan, Dr. Sachin Kheterpal, is leading efforts to streamline data ingestion and management for Michigan Medicine’s research teams. To drive faster research innovation and ML development, the University of Michigan will be piloting the management of data through the Azure API instead of their on-premises systems. “We’re expecting to reduce operational workloads, increase data control, improve data de-identification, and enable our data scientists to move faster with data normalized in the FHIR standard that benefits from a community of developers based upon FHIR resources.”

If you want additional support as you integrate FHIR, we’ve also been working with over 25 partners in our Early Access Program. ISV and SI partners in the Early Access Program understand the technical details and applications for Azure API for FHIR and can help get your data into FHIR and the cloud even more easily.

Investing in FHIR to accelerate AI in healthcare

The Azure ecosystem already has robust components for Microsoft partners to build secure and compliant health solutions in the cloud on their own, but we’re going to continue making it easier. We’re focused on delivering turnkey cloud solutions so our healthcare partners can focus their attention on innovation. Check out Azure API for FHIR and do more with your health data.


FHIR® is the registered trademark of HL7 and is used with the permission of HL7.

Analytics in Azure is up to 14x faster and costs 94% less than other cloud providers. Why go anywhere else?


It’s true. With the volume and complexity of data rapidly increasing, performance and security are critical requirements for analytics. But not all analytics services are built equal. And not all cloud storage is built for analytics.

Only Azure provides the most comprehensive set of analytics services from data ingestion to storage to data warehousing to machine learning and BI. Each of these services has been finely tuned to provide industry-leading performance, security, and ease of use at unmatched value. In short, Azure has you covered.

Unparalleled price-performance

When it comes to analytics, price-performance is key. In July 2018, GigaOm published a study that showed that Azure SQL Data Warehouse was 67 percent faster and 23 percent cheaper than Amazon Web Services Redshift.

That was then. Today, we’re even better!

In the most recent study by GigaOm, they found that Azure SQL Data Warehouse is now outperforming the competition by up to a whopping 14x. No one else has produced independent, industry-accepted benchmarks like these. Not AWS Redshift or Google BigQuery. And the best part? Azure is up to 94 percent cheaper.

Price performance comparison

This industry leading price-performance extends to the rest of our analytics stack. This includes Azure Data Lake Storage, our cloud data storage service, and Azure Databricks, our big data processing service. Customers like Newell Brands – worldwide marketer of consumer and commercial products such as Rubbermaid, Mr. Coffee and Oster – recently moved their workload to Azure and realized significant improvements.

“Azure Data Lake Storage will streamline our analytics process and deliver better end to end performance with lower cost.” 

– Danny Siegel, Vice President of Information Delivery Systems, Newell Brands

Secure cloud analytics

All the price-performance in the world means nothing without security. Make the comparison and you will see Azure is the most trusted cloud in the market. Azure has the most comprehensive set of compliance offerings, including more certifications than any other cloud vendor, combined with advanced identity governance and access management through Active Directory integration.

For analytics, we have developed additional capabilities to meet customers’ most stringent security requirements. Azure Data Lake Storage provides multi-layered security including POSIX compliant file and folder permissions and at-rest encryption. Similarly, Azure SQL Data Warehouse utilizes machine learning to provide the most comprehensive set of security capabilities across data protection, access control, authentication, network security, and automatic threat detection.

Insights for all

What’s the best complement to Azure Analytics’ unmatched price-performance and security? The answer is Microsoft Power BI.

Power BI’s ease of use enables everyone in your organization to benefit from our analytics stack. Employees can get their insights in seconds from all enterprise data stored in Azure. And without limitations on concurrency, Power BI can be used across teams to create the most beautiful visualizations that deliver powerful insights.

Leveraging Microsoft’s Common Data Model, Power BI users can easily access and analyze enterprise data using a common data schema without needing complex data transformation. Customers looking for petabyte-scale analytics can leverage Power BI Aggregations with Azure SQL Data Warehouse for rapid query. Better yet, Power BI users can easily apply sophisticated AI models built with Azure. Powerful insights easily accessible to all.

Customers like Heathrow Airport, one of the busiest airports in the world, are empowering their employees with powerful insights:

“With Power BI, we can very quickly connect to a wide range of data sources with very little effort and use this data to run Heathrow more smoothly than ever before. Every day, we experience a huge amount of variability in our business. With Azure, we’re getting to the point where we can anticipate passenger flow and stay ahead of disruption that causes stress for passengers and employees.”

– Stuart Birrell, Chief Information Officer, Heathrow Airport

Future-proof

We continue to focus on making Azure the best place for your data and analytics. Our priority is to meet your needs for today and tomorrow.

So, we are excited to make the following announcements:

  • General availability of Azure Data Lake Storage: The first cloud storage that combines the best of a hierarchical file system and blob storage.
  • General availability of Azure Data Explorer: A fast, fully managed service that simplifies ad hoc and interactive analysis over telemetry, time-series, and log data. This service, which powers other Azure services like Log Analytics, Application Insights, and Time Series Insights, lets you query streaming data to identify trends, detect anomalies, and diagnose problems.
  • Preview of new Mapping Data Flow capability in Azure Data Factory: Mapping Data Flow provides a visual, zero-code experience to help data engineers easily build data transformations. This complements Azure Data Factory’s code-first experience, enabling data engineers of all skill levels to collaborate and build powerful hybrid data transformation pipelines.

Azure provides the most comprehensive platform for analytics. With these updates, Azure solidifies its leadership in analytics.

Azure Data Lake Storage diagram

Watch this space. There’s more to come!

Get started today

To learn more about how Azure provides the best price-performance, get started today.

Individually great, collectively unmatched: Announcing updates to 3 great Azure Data Services


As Julia White mentioned in her blog today, we’re pleased to announce the general availability of Azure Data Lake Storage Gen2 and Azure Data Explorer. We also announced the preview of Azure Data Factory Mapping Data Flow. With these updates, Azure continues to be the best cloud for analytics with unmatched price-performance and security. In this blog post we’ll take a closer look at the technical capabilities of these new features.

Azure Data Lake Storage - The no compromise Data Lake

Azure Data Lake Storage (ADLS) combines the scalability, cost effectiveness, security model, and rich capabilities of Azure Blob Storage with a high-performance file system that is built for analytics and is compatible with the Hadoop Distributed File System. Customers no longer have to trade off between cost effectiveness and performance when choosing a cloud data lake.

One of our key priorities was to ensure that ADLS is compatible with the Apache ecosystem. We accomplished this by developing the Azure Blob File System (ABFS) driver. The ABFS driver is officially part of Apache Hadoop and Spark and is incorporated in many commercial distributions. The ABFS driver defines a URI scheme that allows files and folders to be distinctly addressed in the following manner:

abfs[s]://file_system@account_name.dfs.core.windows.net/<path>/<path>/<filename>

It is important to note that the file system semantics are implemented server-side. This approach eliminates the need for a complex client-side driver and ensures high fidelity file system transactions.
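
As an illustration, a path in that scheme can be read directly from Spark once the ABFS driver is configured with credentials for the storage account; the file system (container), account, and path below are hypothetical.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("adls-gen2-example").getOrCreate()

    # Hypothetical file system (container), account, and path in the abfss:// scheme.
    path = "abfss://myfilesystem@myaccount.dfs.core.windows.net/raw/events/2019/01/events.csv"

    # The ABFS driver resolves the URI; file and folder semantics are enforced server-side.
    df = spark.read.option("header", "true").csv(path)
    df.printSchema()
    print(df.count())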

To further boost analytics performance, we implemented a hierarchical namespace (HNS) which supports atomic file and folder operations. This is important because it reduces the overhead associated with processing big data on blob storage. This speeds up job execution and lowers cost because fewer compute operations are required.

The ABFS driver and HNS significantly improve ADLS’ performance, removing scale and performance bottlenecks.  This performance enhancement is now available at the same low cost as Azure Blob Storage.

ADLS offers the same powerful data security capabilities built into Azure Blob Storage, such as:

  • Encryption of data in transit (via TLS 1.2) and at rest
  • Storage account firewalls
  • Virtual network integration
  • Role-based access security

In addition, ADLS’ file system provides support for POSIX compliant access control lists (ACLs). With this approach, you can provide granular security protection that restricts access to only authorized users, groups, or service principals and provides file and object data protection.
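
As a rough sketch of what that looks like in code (assuming the azure-storage-file-datalake Python SDK; the account, file system, and service principal object id are placeholders), you can grant a principal read and execute access on a folder with a POSIX-style ACL string:

    from azure.storage.filedatalake import DataLakeServiceClient

    # Hypothetical account and credential.
    service = DataLakeServiceClient(
        account_url="https://myaccount.dfs.core.windows.net",
        credential="<account-key-or-token-credential>",
    )

    directory = service.get_file_system_client("myfilesystem").get_directory_client("raw/events")

    # Owner, group, other, plus a named service principal (by object id) with read/execute.
    acl = "user::rwx,group::r-x,other::---,user:00000000-0000-0000-0000-000000000000:r-x"
    directory.set_access_control(acl=acl)

    print(directory.get_access_control()["acl"])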

Azure Data Lake Storage diagram

ADLS is tightly integrated with Azure Databricks, Azure HDInsight, Azure Data Factory, Azure SQL Data Warehouse, and Power BI, enabling an end-to-end analytics workflow that delivers powerful business insights throughout all levels of your organization. Furthermore, ADLS is supported by a global network of big data analytics ISVs and system integrators, including Cloudera and Hortonworks.

Next steps

Azure Data Explorer – The fast and highly scalable data analytics service

Azure Data Explorer (ADX) is a fast, fully managed data analytics service for real-time analysis on large volumes of streaming data. ADX is capable of querying 1 billion records in under a second with no modification of the data or metadata required. ADX also includes native connectors to Azure Data Lake Storage, Azure SQL Data Warehouse, and Power BI and comes with an intuitive query language so that customers can get insights in minutes.

Designed for speed and simplicity, ADX is architected with two distinct services that work in tandem: The Engine and Data Management (DM) service. Both services are deployed as clusters of compute nodes (virtual machines) in Azure.

Azure Data Explorer diagram

The Data Management (DM) service ingests various types of raw data and manages failure, backpressure, and data grooming tasks when necessary. The DM service also enables fast data ingestion through a unique method of automatic indexing and compression.

The Engine service is responsible for processing the incoming raw data and serving user queries. It uses a combination of auto scaling and data sharding to achieve speed and scale. The read-only query language is designed to make the syntax easy to read, author, and automate. The language provides a natural progression from one-line queries to complex data processing scripts for efficient query execution.
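
To give a feel for the query language from application code, here is a minimal sketch using the azure-kusto-data Python client; the cluster, database, table, and column names are placeholders, and the one-line query aggregates events per hour over the last day.

    from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

    # Hypothetical cluster; device-code sign-in is one of several supported auth methods.
    kcsb = KustoConnectionStringBuilder.with_aad_device_authentication(
        "https://mycluster.westus.kusto.windows.net"
    )
    client = KustoClient(kcsb)

    # One-line query: count events per hour over the last day.
    query = "MyEvents | where Timestamp > ago(1d) | summarize Count = count() by bin(Timestamp, 1h)"

    response = client.execute("MyDatabase", query)
    for row in response.primary_results[0]:
        print(row["Timestamp"], row["Count"])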

ADX is available in 41 Azure regions and is supported by a growing ecosystem of partners, including ISVs and system integrators.

Next steps

Azure Data Factory Mapping Data Flow – Visual, zero-code experience for data transformation

Azure Data Factory (ADF) is a hybrid cloud-based data integration service for orchestrating and automating data movement and transformation. ADF provides over 80 built-in connectors to structured, semi-structured, and unstructured data sources.

With Mapping Data Flow in ADF, customers can visually design, build, and manage data transformation processes without learning Spark or having a deep understanding of their distributed infrastructure.

Azure Data Factory Mapping Data Flow

Mapping Data Flow combines a rich expression language with an interactive debugger to easily execute, trigger, and monitor ETL jobs and data integration processes.

Azure Data Factory is available in 21 regions and expanding, and is supported by a broad ecosystem of partners including ISVs and system integrators.

Next steps

Azure is the best place for data analytics

With these technical innovations announced today, Azure continues to be the best cloud for analytics. Learn more about why analytics in Azure is simply unmatched.

Azure DevOps Projects supporting Azure Cosmos DB and Azure Functions


With Azure DevOps Projects we want to make it easy for you to set up a fully functional DevOps pipeline tailored to the development language and application platform you want to leverage.

We have been making continuous enhancements to Azure DevOps Projects and in the latest deployment now available to all customers, we have added support for Azure Cosmos DB and Azure Functions as target destinations for your application. This builds on the existing Azure App Service, Azure SQL Database, and Azure Kubernetes Service (AKS) support.

The support of Azure Cosmos DB in Azure DevOps Projects means that you will now be able to create a skeleton two tier Node.js application backed by Azure Cosmos DB in just a few clicks. Azure DevOps Projects creates all the scaffolding for your pipeline to give you everything you need to develop, deploy, and monitor your application including:

  • A Git code repository hosted in Azure Repos with a skeleton Node.js application
  • A CI/CD pipeline in Azure Pipelines for deploying the database tier to Azure Cosmos DB and the web tier to Web Apps for Containers, Azure Kubernetes Service, or a Windows Web App in Azure
  • Provisioning all the Azure resources in your subscription required for the application
  • Application Insights integration for monitoring your application

Choose an application framework screenshot

After using Azure DevOps Projects to scaffold your application from the Azure portal you can then access the code and the CI/CD pipeline using Azure Repos and Azure Pipelines respectively.

With support for Azure Functions in Azure DevOps Projects, you will be able to create a skeleton .NET or a sample Node.js serverless application in just a few clicks. Like Azure Cosmos DB, with this workflow you will have everything you need to develop, deploy, and monitor your application, including the Git code repo, CI/CD pipeline, Application Insights, and necessary Azure resources.

Select an Azure service to deploy the application screenshot

These features are now available in the Azure portal. Get started by creating Azure DevOps Projects now. To learn more, please take a look at the documentation, “Azure DevOps Projects.”

Azure Monitor January 2019 updates


Azure Monitor, which now includes Log Analytics and Application Insights, provides sophisticated tools for collecting and analyzing telemetry. It allows you to maximize the performance and availability of your cloud, on-premises resources, and applications. It helps you understand how your applications are performing and proactively identifies issues affecting them and the resources on which they depend.

Learn more about how you can get started with Azure Monitor. Now let’s check out what’s new from the past month.

Acknowledgements

Gartner Peer Insights Customers’ Choice 2019 seal

First, a huge thank you to our customers for once again naming Microsoft a Gartner Peer Insights Customers’ Choice for Application Performance Monitoring Suites for its System Center Operations Manager, Microsoft Azure Application Insights, and System Center Global Service Monitor applications.

Application Insights

Application Insights is the application performance monitoring (APM) service of Azure Monitor, providing observability for Java, .NET, and Node.js web services, plus client-side JavaScript apps.

End-to-end transactions

The end-to-end transactions view now supports time scrubbing. Click and drag over a period of time to filter the view to that time range and analyze it in more detail.

Screenshot of end-to-end transaction details

Performance and failures

We’ve squashed a handful of bugs in the performance and failures tools:

  • The Roles tab now preserves role selection while navigating from the application map
  • The Roles tab no longer shows duplicated role instances with empty role names
  • The details pane no longer shows "…" beside items like event times that shouldn't have had this button

Availability

In the availability tool, we fixed a bug where navigating from the availability scatter plot wouldn’t show the closest result with a web test available in the end-to-end transactions view.

Application map

We’ve made the application map even easier to read and navigate:

  • Added a “Zoom to fit” button
  • Grouped nodes are now shown as a stack to make them easier to distinguish
  • Added “expand” and “collapse” buttons for the insights cards in the flyout menu
  • Nodes without incoming connections are now shown closer to their first outbound connection on the map, which should make many maps easier to read
  • Better support for proxies (multiple services called through the same host name) by removing the proxy dependency node and directly linking the services
  • Maps with many complex grouped edges will now show statistics

Screenshot of application map

Pricing calculator

We updated the Azure pricing calculator to make it easier to estimate your Application Insights bills. Now, you can enter an estimate of traffic to your app and we’ll show prices for apps that have received similar levels of traffic.

Application Insights SDK

We released v2.9.0 of the .NET SDK and v2.6.0 of the ASP.NET Core SDK, each with several performance improvements and bug fixes.

We also shipped a standalone version of the Application Insights Provider for ILogger, which adds scopes support, the most requested feature from our GitHub community.

OpenCensus SDK

We released an alpha version of our C# OpenCensus SDK. OpenCensus is a cross-industry open source project, working towards a single distribution of libraries for metrics and distributed tracing with minimal overhead.

Log Analytics

Log Analytics blade renamed

The Log Analytics blade in the Azure portal has been renamed Log Analytics workspaces. This change clarifies that this blade is intended to manage your workspaces by connecting data sources, installing solutions, measuring cost, and more. You can also use the logs tool to query logs of a selected workspace, but remember that logs is also available through other paths such as Azure Monitor, Application Insights, virtual machines, and many others.

Log Analytics data encryption

Log Analytics uses Azure Data Explorer to manage its data. The data is stored in Azure Storage, and it is encrypted using a Microsoft-managed encryption key. Azure Data Explorer also uses an SSD-backed hot cache that typically stores the last two weeks of data. Starting in January, the data in the SSD caches has also been encrypted in all regions except West Central US, which will be completed in February.

Protection from losing queries on page refresh

Have you ever worked for a long time on a query and, just when you got it right, accidentally refreshed the page and lost all your work? Don’t worry, we’ve got you covered. Log Analytics now automatically saves your queries, so they don’t get lost. This feature requires third-party cookies to be enabled in your browser.

Schema updates - Table preview, a new table icon, and featured tables

Log Analytics users love our schema display, so we made it even better. New icons indicate table items in the schema view, making it easier to read. When you hover over a table, a new preview item lets you quickly run a query to view the table’s contents. When you’re in a virtual machine context, the new featured view shows commonly queried tables for quicker insights.

Screenshot of filter preview capability

Select filters

The Log Analytics filter pane (preview) is a great way to refine query results without in-depth KQL knowledge. The filter pane suggests fields to query based on our algorithms. However, in some cases the field you need to filter on is not included in the initial list of filter fields. To address this, we allow the customization of filter fields right from the filter pane. Simply select the new Select filter icon and add the field you need:

Screenshot of Select filter button adding desired fields

Then select the field you’d like to see filters for:

Screenshot of select filter fields being updated

Set a title for your charts

Add more context to your charts by using the title keyword to add a title. This is especially useful when pinning a chart to dashboards:

Screenshot of title keyword in use to add chart title

Screenshot of Noa dashboard display

Support made easier with your request ID

If a query fails and you’d like to contact support, you can now provide the request ID of the failed query and we’ll be able to investigate what caused that specific run to fail.

To get the request ID, select the right-most button on the results status bar and it will be copied to your clipboard.

Screenshot of button copying request ID results to clipboard

Workbooks

Azure Monitor Workbooks are rich, interactive reports that combine text, analytics queries, Azure metrics, and parameters. We’ve made two additions this month:

  • You can now take a workbook and pin all its sections as tiles to an Azure dashboard. To give this a try, select the Pin button in the toolbar of a workbook.
  • Workbooks created from the Troubleshooting Guides tab or from Azure Monitor for Resource Groups now allow you to choose which subscription, resource group, and location to save them in.

Screenshot of workbooks application performance insights

Azure Metrics

From the metrics page, you can now pick which Azure dashboard to pin your metric charts to. You can even create a new dashboard right from the same place.

Screenshot of monitor metrics

You can also now lock y-axis boundaries for metrics charts.

Screenshot of locking in y-axis boundaries for metrics charts

Finally, we’ve removed the classic metric explorer tool from the Azure portal now that the transition to the new metrics tool is complete.

Azure Monitor for Virtual Machines (VMs)

Workbooks are now available in Azure Monitor for VMs. Select the View workbooks link to open the gallery and then try out one of the reports. Feel free to customize the report as needed, or duplicate it to start making a new report.

Screenshot of monitor virtual machines preview dashboard

Screenshot of workbooks gallery

Microsoft was named a Gartner Peer Insights Customers’ Choice for Application Performance Monitoring Suites in both June 2018 and January 2019.

 

The Gartner Peer Insights Customers’ Choice badge is a trademark and service mark of Gartner, Inc., and/or its affiliates, and is used herein with permission. All rights reserved. Gartner Peer Insights Customers’ Choice constitute the subjective opinions of individual end-user reviews, ratings, and data applied against a documented methodology; they neither represent the views of, nor constitute an endorsement by, Gartner or its affiliates.


Cloud Commercial Communities webinar and podcast newsletter–February 2019


Welcome to the Cloud Commercial Communities monthly webinar and podcast update. Each month the team focuses on core programs, updates, trends, and technologies that Microsoft partners and customers need to know to increase success using Azure and Dynamics. Make sure you catch a live webinar and participate in live QA. If you miss a session, you can review it on demand. Also, consider subscribing to the industry podcasts to keep up to date with industry news.


Upcoming in February 2019

Webinars

  • Optimize Your Marketplace Listing with Featured Apps and Services - Tuesday, February 5, 2019 11:00 AM PST
    Do you have an application or service listed on Azure Marketplace or AppSource? Looking to optimize your listing to be more discoverable by customers? Discoverability in Azure Marketplace and AppSource can be optimized in a variety of ways. Join this session to learn about how you can gain more visibility for your listings by optimizing content, using keywords, adding trials, and about what matters to Microsoft for Featured Apps and Featured Services on Azure Marketplace and AppSource.
  • Leveraging Free Azure Sponsorship to Grow Your Business on Azure - Tuesday, February 12, 2019 10:00 AM PST
    Microsoft has made significant investments in our partners and customers to help them meet today’s complex business challenges and drive business growth. Through Microsoft Azure Sponsorship, partners and customers can get access to free Azure based on their deployment and technical needs. Azure Sponsorship is available to new and existing Azure customers looking to try new partner solutions, and to partners working to build their solutions on Azure.
  • Get the Most Out of Azure with Azure Advisor - Tuesday, February 19, 2019 10:00 AM PST
    Azure Advisor is a free Azure service that analyzes your configurations and usage and provides personalized recommendations to help you optimize your resources for high availability, security, performance, and cost. In this demo-heavy webinar, you’ll learn how to review and remediate Azure Advisor recommendations so you can stay on top of Azure best practices and get the most out of your Azure investment both for your own organization and your customers.
  • Incidents, Maintenance, and Health Advisories: Stay Informed with Azure Service Health - Tuesday, February 26, 2019 10:00 AM PST
    Azure Service Health is a free Azure service that provides personalized alerts and guidance when Azure service issues affect you. It notifies you, helps you understand the impact to your resources, and keeps you updated as the issue is resolved. It can also help you prepare for planned maintenance and changes that could affect the availability of your resources. In this demo-heavy webinar, you’ll learn how to use Azure Service Health to keep both your organization and your customers informed about Azure service incidents.
  • Introducing a New Approach to Learning: Microsoft Learn - Wednesday, February 27, 2019 11:00 AM PST
    At Microsoft Ignite 2018, Microsoft launched an exciting new learning platform called Microsoft Learn. During this session, we will provide a demo and overview of the platform, the inspiration and vision of its design, and how we have adapted training to modern learning styles.

Podcasts

Recap for January 2019

Webinars

  • Grow, Build, and Connect with Microsoft for Startups - January 23, 2019 at 11am PST
    Microsoft for Startups is a unique program designed to help startups become a Microsoft business partner, through access to technology, channels, markets and customers. Tune into this session to learn more about the Microsoft for Startups program, a $500 million initiative to provide startups access to both the technology and customer base needed to build and grow their business.
  • Transform Data into Dollars by Enabling Intelligent Retail with Microsoft - January 29, 2019 at 10am PST
    Microsoft is enabling retailers to deliver personalized customer experiences by empowering employees, driving digital transformation, and capturing data-based insights to accelerate growth for our partners and customers.  This 30-minute session will arm partners with real case studies and actionable solutions for each Intelligent Retail scenario with an opportunity for live Q&A with our Retail expert.
  • Azure Marketplace and AppSource Publisher Payouts and Seller Insights - January 30, 2019 at 11am PST
    Azure Marketplace and AppSource is your launchpad to Go-To-Market with Microsoft and promote your offerings to customers. Join this exciting session to learn more about how Azure Marketplace and AppSource Publisher payouts work and gain exposure to the Seller Insights tool within Cloud Partner Portal.

Podcasts

Check out recent podcast episodes at the Microsoft industry experiences team podcast page.

Performance best practices for using Azure Database for PostgreSQL – Connection Pooling


This blog is a continuation of a series of blog posts to share best practices for improving performance and scale when using Azure Database for PostgreSQL service. In this post, we will focus on the benefits of using connection pooling and share our recommendations to improve connection resiliency, performance, and scalability of applications running on Azure Database for PostgreSQL. If you have not read the previous performance best practice blogs in the series, we would highly recommend reading the following blog posts to learn, understand, and adopt the recommended best practices for using Azure Database for PostgreSQL service.

In PostgreSQL, establishing a connection is an expensive operation. This is because each new connection to PostgreSQL requires forking an OS process and allocating new memory for the connection. As a result, transactional applications that frequently open and close connections at the end of transactions can experience higher connection latency, resulting in lower database throughput (transactions per second) and overall higher application latency. It is therefore recommended to leverage connection pooling when designing applications using Azure Database for PostgreSQL. This significantly reduces connection latency by reusing existing connections and enables higher database throughput (transactions per second) on the server. With connection pooling, a fixed set of connections is established at startup time and maintained. This also helps reduce memory fragmentation on the server caused by dynamically established new connections.

Connection pooling can be configured on the application side if the app framework or database driver supports it. If that is not supported, the other recommended option is to leverage a proxy connection pooler service like PgBouncer or Pgpool running outside the application and connecting to the database server. Both PgBouncer and Pgpool are developed by the community and can be used with Azure Database for PostgreSQL. As we continue, we will focus our conversation on PgBouncer in the context of real user experiences.
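
For application-side pooling, here is a minimal sketch using psycopg2's built-in pool against an Azure Database for PostgreSQL server; the server, database, and credentials are placeholders (note the user@servername convention and sslmode=require).

    from psycopg2 import pool

    # Hypothetical Azure Database for PostgreSQL server and credentials.
    connection_pool = pool.SimpleConnectionPool(
        minconn=1,
        maxconn=20,
        host="mydemoserver.postgres.database.azure.com",
        dbname="mydb",
        user="myadmin@mydemoserver",
        password="<password>",
        sslmode="require",
    )

    conn = connection_pool.getconn()   # reuse an already-established connection
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT now();")
            print(cur.fetchone())
    finally:
        connection_pool.putconn(conn)  # return the connection to the pool

    connection_pool.closeall()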

PgBouncer is a lightweight connection pooler that can be installed on the virtual machine (VM) running the application. The application connects to the PgBouncer proxy service running locally on the VM, while the PgBouncer service in turn connects to the Azure Database for PostgreSQL service using the credentials and configuration settings specified in the pgbouncer.ini file. The maximum number of connections and the default pool size can be defined in the configuration settings in pgbouncer.ini.

Connection pooling comparison diagram
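
From the application's point of view, only the connection target changes: it points at the local PgBouncer listener (6432 is PgBouncer's default port) instead of the database server. A minimal sketch, assuming pgbouncer.ini already maps the database to the Azure Database for PostgreSQL server:

    import psycopg2

    # The application talks to PgBouncer on the local VM; PgBouncer holds the pooled
    # server connections to Azure Database for PostgreSQL.
    conn = psycopg2.connect(
        host="127.0.0.1",
        port=6432,                  # PgBouncer's default listen port
        dbname="mydb",              # must match a [databases] entry in pgbouncer.ini
        user="myadmin@mydemoserver",
        password="<password>",
    )

    with conn.cursor() as cur:
        cur.execute("SELECT 1;")
        print(cur.fetchone())

    conn.close()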

If your application is containerized and running on Azure Kubernetes Service (AKS), you can run PgBouncer as a sidecar proxy. As part of our commitment to provide native integration of best-in-class OSS databases with Azure’s industry-leading ecosystem, we have published a PgBouncer sidecar proxy image in the Microsoft container registry. The PgBouncer sidecar runs alongside the application container in the same pod in AKS and provides a connection pooling proxy to Azure Database for PostgreSQL. If the application container fails over or restarts, the sidecar container will follow, thereby providing high availability with connection resiliency and predictable performance. Visit the Docker Hub page to learn more about how to access and use this image. For best practices around development with Azure Kubernetes Service, we recommend following the documentation, “Connecting Azure Kubernetes Service and Azure Database for PostgreSQL.”

To give some estimates of the performance improvement when using PgBouncer for connection pooling with Azure Database for PostgreSQL, we ran a simple performance benchmark test with pgbench. pgbench provides a configuration setting to create a new connection for every transaction, so we leveraged that to measure the impact of connection latency on the throughput of the application. The following are the observations from A/B testing comparing throughput with the standard pgbench benchmark with and without PgBouncer. In the test, we ran pgbench with a scale factor of 5 against Azure Database for PostgreSQL running on the general purpose tier with 2 vCores (GP_Gen5_2). The only variable during the tests was PgBouncer. With PgBouncer, the throughput improved 4x, as shown below, while connection latency was reduced by 40 percent.

Performance improvement with PgBouncer column chart

PgBouncer, with its built-in retry logic, can further ensure connection resiliency, high availability, and transparent application failover during planned (scheduled/scale-up/scale-down) or unplanned failover of the database server. The retry logic has been found to be very useful for OSS applications like CKAN or Apache Airflow that use SQLAlchemy. Without PgBouncer, database failover events require the application service to be restarted for connections to be re-established following a connection failure. In this scenario, it is also important to set the connection timeout sufficiently higher than the retry interval to allow retry attempts to proceed before timing out.

To summarize, since new connections are an expensive operation in PostgreSQL, especially for applications that open new connections frequently, we highly recommend using connection pooling when running applications against Azure Database for PostgreSQL. If the application is not designed to leverage connection pooling out of the box, you can leverage PgBouncer as a connection pooling proxy. The benefits of running an application with a PgBouncer proxy are:

  • Improved throughput and performance
  • No connection leaks by defining the maximum number of connections to the database server
  • Improved connection resiliency against restarts
  • Reduced memory fragmentation

We hope that you are taking advantage of Azure Database for PostgreSQL. Please continue to provide feedback on the features and functionality that you want to see next. If you need any help or have questions, please check out the “Azure Database for PostgreSQL Documentation.” You can also reach out to us by emailing the Ask Azure DB for PostgreSQL alias, and be sure to follow us on Twitter @AzureDBPostgres and #postgresql for the latest news and announcements.

Acknowledgements

Special thanks to Diana Putnam, Rachel Agyemang, Sudhakar Sannakkayala, Sunil Agrawal, Sunil Kamath, Bhavin Gandhi, Anitah Cantele, and Prabhat Tripathi for their contributions to this posting.

Azure Marketplace new offers – Volume 31


We continue to expand the Azure Marketplace ecosystem. From January 1 to January 15, 2019, 67 new offers successfully met the onboarding criteria and went live. See details of the new offers below:

Virtual machines

Akumo Software

Akumo Software: Akumo Software's platform extends datacenter environments between virtualized or cloud-based infrastructure. It provides a consistent and simple way to cost-effectively manage an on-demand datacenter.

BlogEngine.NET on Windows Server 2016

BlogEngine.NET on Windows Server 2016: BlogEngine.NET is a lightweight, simple, user-friendly blog engine that can be an excellent alternative to WordPress. Easy to modify and extend, it is specifically designed for .NET developers.

BlogEngine.NET on Windows Server 2019

BlogEngine.NET on Windows Server 2019: BlogEngine.NET is a lightweight, simple, user-friendly blog engine that can be an excellent alternative to WordPress. Easy to modify and extend, it is specifically designed for .NET developers.

Conductor4SQL Central Server

Conductor4SQL Central Server: This virtual machine comes with all the components required for using Conductor4SQL, including Windows Server 2016, Microsoft SQL Server 2017, Microsoft Power BI Desktop, and Conductor4SQL.

Dell EMC NetWorker Virtual Edition 18.2

Dell EMC NetWorker Virtual Edition 18.2: Dell EMC NetWorker software provides fast, efficient backup and recovery for enterprise applications and databases.

F5 BIG-IP Cloud Edition

F5 BIG-IP Cloud Edition: This edition is comprised of per-app VEs and BIG-IQ centralized management. The former provides intelligent traffic management and web application firewall security, while the latter delivers deployment automation, management, and visibility.

GnuCash on Windows Server 2016

GnuCash on Windows Server 2016: GnuCash helps you to track your bank accounts, income, stocks, expenditures, and more. Users have the freedom to run, copy, distribute, study, change, and improve the software. It also works on mobile operating systems.

GnuCash on Windows Server 2019

GnuCash on Windows Server 2019: GnuCash helps you to track your bank accounts, income, stocks, expenditures, and more. Users have the freedom to run, copy, distribute, study, change, and improve the software. It also works on mobile operating systems.

IIS on Windows Server 2019

IIS on Windows Server 2019: Key features of IIS on Windows Server 2019 include wildcard host headers, IIS administration PowerShell cmdlets, and improved coalescing of connections to deliver an uninterrupted and properly encrypted browsing experience.

Incorta Free Trial

Incorta Free Trial: With Incorta’s Direct Data Mapping engine, you get real-time aggregation of large, complex business data without needing a data warehouse.

InterSystems IRIS Community Edition

InterSystems IRIS Community Edition: InterSystems IRIS is a complete data platform that provides developers the freedom to choose the language and data model best suited to rapidly develop their applications.

InterSystems IRIS Express Edition

InterSystems IRIS Express Edition: InterSystems IRIS is a complete data platform that provides developers the freedom to choose the language and data model best suited to rapidly develop their applications. See additional pricing options for this edition.

Neo4j Enterprise VM Version 3.5

Neo4j Enterprise VM Version 3.5: Neo4j's graph database platform helps organizations make sense of their data by revealing how people, processes, locations, and systems are interrelated. This approach powers apps tackling AI, fraud detection, master data, and recommendations.

NetScaler MA Service Agent 13.0

NetScaler MA Service Agent 13.0: The NetScaler MA Service agent software works as an intermediary between the NetScaler Management and Analytics Service and the NetScaler instances within Microsoft Azure.

Nginx on Windows Server 2016

Nginx on Windows Server 2016: Features of Nginx on Windows Server 2016 include reverse proxy with caching, IPv6, load balancing, FastCGI support with caching, WebSockets, TLS/SSL with SNI, and the handling of static files, index files, and automatic indexing.

OpenCart on Windows Server 2016

OpenCart on Windows Server 2016: Written in PHP, OpenCart is a free, open-source e-commerce platform available under the GNU General Public License, which allows end users to modify the software.

OpenCart on Windows Server 2019

OpenCart on Windows Server 2019: Written in PHP, OpenCart is a free, open-source e-commerce platform available under the GNU General Public License, which allows end users to modify the software.

Puppet Enterprise

Puppet Enterprise: Puppet Enterprise lets you automate the entire lifecycle of your Azure infrastructure simply and securely, from initial provisioning through application deployment.

Pyramid 2018 - Windows Server

Pyramid 2018 - Windows Server: Pyramid 2018 lets business users do high-end analytics and data science on any browser or device without needing IT help. It's the next generation of self-service analytics with governance.

SQL 2017 Enterprise Edition w ER Builder

SQL 2017 Enterprise Edition w/ ER/Builder: With the ER/Builder data modeler for SQL 2017 on Windows Server 2016, you can manage an unlimited number of tables. You can also create an index, triggers, keys, stored procedures, views, generators, and domains.

SQL Server 2017 Standard Edition w ER Builder

SQL Server 2017 Standard Edition w/ ER/Builder: With the ER/Builder data modeler for SQL 2017 on Windows Server 2016, you can manage an unlimited number of tables. You can also create an index, triggers, keys, stored procedures, views, generators, and domains.

SQL Server 2017 Web Edition w ER Builder

SQL Server 2017 Web Edition w/ ER/Builder: With the ER/Builder data modeler for SQL 2017 on Windows Server 2016, you can manage an unlimited number of tables. You can also create an index, triggers, keys, stored procedures, views, generators, and domains.

Strokk Webservices Demo

Strokk Webservices Demo: Wherever a password is used in a web form or an internal application, that piece of knowledge-based authentication can be hardened almost transparently with a behavioral biometrics second factor called keystroke dynamics.

Untangle NG Firewall

Untangle NG Firewall: Use NG Firewall to connect remote locations and ensure safety, reliability, and performance while providing protection for your data, applications, and users.

Varnish Enterprise 6

Varnish Enterprise 6: Varnish Enterprise (VE), previously known as Varnish Plus, is our commercial/enterprise version of the popular open-source HTTP engine/reverse HTTP proxy Varnish Cache (VC).

VyOS 1.2 LTS

VyOS 1.2 LTS: VyOS is a Linux-based open-source network operating system for routers and firewalls.

Windows Virtual Desktop

Windows Virtual Desktop: With Windows Virtual Desktop, Microsoft Office and Windows can be deployed and scaled on Azure in a few moments, including compliance and built-in security.

WordPress on Windows Server 2016

WordPress on Windows Server 2016: Quickly deploy WordPress on Windows 2016 with built-in MySql and phpMyAdmin. Host as many websites or applications as you need.

Xeams on CentOS

Xeams on CentOS: Get this secure and powerful mail server with a strong junk-filtering engine on CentOS. Xeams Community Edition is available as a free software supporting multiple platforms and all mail servers with smart-host functionality.

Xeams on Ubuntu

Xeams on Ubuntu: Get this secure and powerful mail server with a strong junk-filtering engine on Ubuntu. Xeams Community Edition is available as a free software supporting multiple platforms and all mail servers with smart-host functionality.

Xeams on Windows Server 2016

Xeams on Windows Server 2016: Get this secure and powerful mail server with a strong junk-filtering engine on Windows Server 2016. Xeams Community Edition is available as a free software supporting multiple platforms and all mail servers with smart-host functionality.

Xeams on Windows Server 2019

Xeams on Windows Server 2019: Get this secure and powerful mail server with a strong junk-filtering engine on Windows Server 2019. Xeams Community Edition is available as a free software supporting multiple platforms and all mail servers with smart-host functionality.

Web applications

Aggregion Blockchain Node

Aggregion Blockchain Node: Aggregion operates a blockchain ecosystem enabling major copyright holders to fully control their global end-to-end distribution networks and licensing of digital content. Microsoft Azure products enhance the Aggregion blockchain platform.

Archive One

Archive One: Archive One is a document management system designed to help document administrators classify, store, secure, search for, and retrieve essential company records. Make compliance and audits easy with Archive One.

Drupal with Azure Database for MariaDB

Drupal with Azure Database for MariaDB: This solution uses a virtual machine for the application front end and the Azure Database for MariaDB service for the application data. Drupal is an open-source content management system used to create websites and apps.

Lavelle Networks ScaleAon SD-WAN

Lavelle Networks ScaleAon SD-WAN: Lavelle Networks ScaleAon SD-WAN is a hybrid WAN. ScaleAon SD-WAN accelerates cloud adoption for enterprises by seamlessly extending the wide area network (WAN) across physical and virtual resources.

Lightning Network for Azure

Lightning Network for Azure: This distribution provides a virtual machine instance that runs Bitcoin (btcd), Litecoin (ltcd), and Lightning Network (lnd or c-lightning). You can also run BTCPayServer as a sample application for the node network.

MariaDB Galera Cluster

MariaDB Galera Cluster: MariaDB Galera is a multi-master database cluster solution for synchronous replication and high availability. This solution uses multiple virtual machines to replicate your data in a configurable number of nodes.

Mediant VE Session Border Controller (SBC)

Mediant VE Session Border Controller (SBC): Enable Microsoft Teams Direct Routing or connect SIP trunks to Skype for Business Server. AudioCodes’ Mediant session border controllers make deployment easier and help users set up multi-SBC network interfaces.

Spanning Backup For Office 365

Spanning Backup For Office 365: Spanning Backup for Office 365 provides automatic daily backup and recovery for Office 365 mail, calendars, OneDrive, and SharePoint.

Surge Identity (SaaS)

Surge Identity (SaaS): Surge Identity is a cloud-based identity solution that enables secure sign-in using trusted identity and social providers, and it secures app-to-app communication using the latest industry security standards.

Tidal Migrations -Premium Insights for Source Code

Tidal Migrations -Premium Insights for Source Code: Tidal Migrations provides your team with a simple, fast, and cost-effective cloud migration management solution. This add-on empowers your team with actionable insights on the apps you plan to refactor or re-platform to Azure.

TimeXtender Discovery Hub

TimeXtender Discovery Hub: This virtual machine runs Windows Server 2016 and the TimeXtender Discovery Hub. The Discovery Hub application server for Azure allows customers to build, deploy, and manage an enterprise-grade analytical architecture.

Vnomic Management for SAP Workloads

Vnomic Management for SAP Workloads: Select your SAP HANA workload requirements without worrying about underlying technical details. Vnomic will automatically compute and provision a complete and validated SAP HANA workload and deliver it on Azure in minutes.

Consulting services

2008 Windows SQL End of Support Workshop - 2 days

2008 Windows/SQL End of Support Workshop - 2 days: End of support is looming for Windows 2008 and SQL Server 2008, and Piksel Group's workshop is here to help you understand your options and create an action plan.

2-Hour Azure Migration Briefing

2-Hour Azure Migration Briefing: This briefing by Flat Rock Technology will provide high-level information on what it takes to migrate to the cloud and to Microsoft Azure in particular.

Airlines ChatBot- 3 week implementation

Airlines ChatBot: 3 week implementation: This is a conversational AI implementation for airlines over Amadeus/Sabre supporting multiple channels and covering flight booking, status, disruption notification, check-in, boarding passes, FAQs, and more.

Azure Analytics 5-Day Readiness Assessment

Azure Analytics 5-Day Readiness Assessment: Pythian Kick Analytics-as-a-Service puts the power of data analytics in the hands of your business users and solves the data silo problem. This five-day assessment is for customers in Canada.

Azure Analytics 5-Day Readiness Assessment (UK)

Azure Analytics 5-Day Readiness Assessment (UK): Pythian Kick Analytics-as-a-Service puts the power of data analytics in the hands of your business users and solves the data silo problem. This five-day assessment is for U.K. customers.

Azure Analytics 5-Day Readiness Assessment (USA)

Azure Analytics 5-Day Readiness Assessment (USA): Pythian Kick Analytics-as-a-Service puts the power of data analytics in the hands of your business users and solves the data silo problem. This five-day assessment is for U.S. customers.

Azure Design Assessment 4-Day Assessment

Azure Design Assessment: 4-Day Assessment: CDW will review your Microsoft Azure environment to verify configuration and provide recommendations according to best practices.

Azure Governance Workshop 5-Day Workshop

Azure Governance Workshop: 5-Day Workshop: CDW will provide an in-depth look at the people, processes, and technology currently in place and document a governance plan that enables IT professionals to effectively support business needs.

Azure Jumpstart 3-Day Implementation

Azure Jumpstart: 3-Day Implementation: CDW will help your organization choose and implement virtual networking technology. Learn the best approaches to deploying virtual machines, including the associated cloud services and storage accounts.

Azure migration and transformation two-day workshop

Azure migration & transformation two-day workshop: In this Azure migration and transformation workshop, Piksel Group will identify candidate cloud services and the benefits of moving to Azure.

Azure migration and transformation briefing (3h)

Azure migration and transformation briefing (3h): Select from a range of Azure migration and transformation consultancy, implementation, and managed services, starting with a cloud briefing and initial cloud readiness assessment.

Azure Migration Assessment 2-Day Assessment

Azure Migration Assessment: 2-Day Assessment: CDW will work with you to deploy an assessment tool in your environment, ensure the tool is configured properly, run the tool, and help review and interpret the results.

Azure transformation five-day proof of concept

Azure transformation five-day proof of concept: This Azure migration and transformation briefing, assessment, planning, and proof-of-concept activity by Piksel Group will identify and validate candidate services and benefits.

Cloud Operations and Monitoring 3-day Assessment

Cloud Operations & Monitoring: 3-day Assessment: Objektkultur Software will plot a strategy for your migration to the cloud and will support you in your change management process, enabling a conversion that integrates into your system landscape.

Connecting with ExpressRoute 2 Day Implementation

Connecting with ExpressRoute: 2 Day Implementation: Your organization will first need to engage with a WAN provider that supports ExpressRoute connectivity. CDW will configure the virtual network and gateway and will assist in establishing an ExpressRoute connection.

Cyber Security PEN Testing 4 Week Assessment

Cyber Security PEN Testing: 4 Week Assessment: Networks come under attack every day, and these attacks can disrupt business, create chaos, and cause reputational damage. A penetration test helps you understand how threat actors might penetrate your network.

Envisioning AI for IoT data 2-day Workshop

Envisioning AI for IoT data: 2-day Workshop: This offer by TheDataTeam is for an AI envisioning workshop conducted at the client's site for discovering use cases that are of immense business value and solvable using Azure and TheDataTeam's Intellegion platform.

EOS Migration Pilot 8-Wk Implementation

EOS Migration Pilot: 8-Wk Implementation: This is a pilot migration of legacy Windows Server workloads to Azure using both discovery and containerization tooling.

Leadership Development Solution

Leadership Development Solution: The Leadership Development Solution helps K-12 organizations make informed decisions. It provides strategic education services along with an Azure data warehouse and visualizations to improve the leadership placement process.

Migrate to Azure at 20 Percent of Consumption 8-wk Impl

Migrate to Azure at 20% of Consumption 8-wk Impl: Migrate your workloads and apps to Azure at just 20 percent of your Azure consumption for the first year. This package by NetEnrich combines our tools expertise with our knowledge of datacenter and app migrations.

Free SAP Sandbox to Azure Migration 2-Week POC

SAP on Azure / QAS Migration Service: 2-Week POC: This proof of concept involves free Azure consumption credits and a migration service offered by MSR IT Services covering SAP QAS/Sandbox landscapes.

SpotLITE Discovery for Azure MSP 2 WKS Assessment

SpotLITE Discovery for Azure MSP: 2 WKS Assessment: Green House Data’s SpotLITE discovery process is designed to determine the overall health of your IT systems and outline a plan to improve operational performance and include Azure.

SSO Using ADFS and ADConnect 3-Day Implementation

SSO Using ADFS and ADConnect: 3-Day Implementation: Integrate your on-premises directories with Azure Active Directory to provide a common identity for access to both cloud and on-premises resources, simplifying things for your end users.


Colfax amplifies the power of its ESAB product portfolio with IoT


If you’ve welded two pieces of metal together, chances are you’ve used equipment from ESAB, a business unit of Colfax Corporation. Started in Sweden in 1904, ESAB was acquired by Colfax in 2012, and offers a broad portfolio of welding, cutting, and gas management equipment to customers in virtually every industry across the globe including agriculture, building construction, energy, light and heavy manufacturing, transportation, and even medical and hospitals. Wherever things are made, ESAB is there.


With the evolution of the Internet of Things (IoT), Colfax saw an opportunity to transform its businesses. What was unique about Colfax's IoT initiative, named Data Driven Advantage (DDA), was its vision of enabling customers to leverage the extensive ESAB portfolio.

Leveraging synergies of a broad portfolio

Many of ESAB’s customers manufacture highly configurable products like tractors, mining equipment, wind towers, and agricultural feed tanks. Imagine the complex process steps involved. First, hundreds of metal pieces are cut and placed on shelves. Then the parts move through the factory. At each step, there is labor required to set up a machine, fuse or cut parts, check for quality, and prepare for the next job. Workers grind metal, weld parts, and refill filler metal. Welds are documented extensively, often manually, for traceability in critical applications like shipbuilding where the paperwork can literally fill a shipping container.

“If customers can leverage the broad ESAB portfolio of welding equipment, power supplies, filler metals, welding tips, mechanized cutters – even helmets, gloves, and protective gear – in a connected way, they can gain incredible insights across their entire manufacturing processes,” explained Ryan Cahalane, Vice President of Digital Growth, Colfax.

Challenges on the IoT journey

Like many other companies, Colfax ran into challenges on its IoT journey, including educating executives, dealing with legacy products, ensuring its IoT solution integrated with existing distribution channels and sales teams, and overcoming a lack of coordination and inconsistent technology choices across business units.

After years in pilot purgatory, the company accelerated its digital growth when it refocused on its core differentiation, gained efficiencies from a common technology platform across businesses, and leveraged ESAB's deep application expertise and wide portfolio to unlock value for its customers. As part of this refocus, the company exited the business of developing a custom IoT platform and selected PTC ThingWorx for Azure and the Microsoft Azure IoT platform. "Deploying ThingWorx for Azure provides businesses like Colfax with a much more holistic offering around digital transformation," said Ron Salvador, Senior Director at PTC.

The re-platforming started in December 2017, and by the HMI 2018 event, ESAB had a field-ready prototype of WeldCloud™ based on ThingWorx and Azure that it began testing with customers. By FabTech in the fall of 2018, the company had expanded the portfolio to include cutting applications with CutCloud™ and began rolling out the common ThingWorx/Azure reference architecture to other business units' products, such as orbital welding and gas monitoring.

Customers gain immediate benefits

Customers have seen immediate benefits. “With insights from WeldCloud™, one of our customers realized that they were using one station’s equipment less than 10 percent of the available time, yet it bottlenecked the overall process,” said Ludvig Enlund, GM and head of ESAB’s DDA initiative. “Plus, they were staffing it with expensive labor where the work didn’t require that level of expertise, meanwhile taking away from other more critical processes where higher skill was required. With this awareness, they rebalanced their processes, adjusted labor, and helped gain significant throughput on a line where every product made can be sold.”

With ESAB Digital Solutions, customers now have data to understand how processes, labor, and material contribute to the cost of each part. “In a job shop environment, customers are able to improve their quoting process, increase profit, and win even more business,” said Enlund. Not only do ESAB Digital Solutions like WeldCloud™ and CutCloud™ make otherwise impossible productivity improvements possible, they also give everyone involved in the operation the chance to elevate their performance with data. Some examples include:

•    Operations managers can identify new productivity drivers
•    Quality engineers can trace a defective weld and determine the correct repairs and how to prevent future issues
•    Service technicians can be proactively alerted if equipment has an anomaly or breaks down, allowing for preventative and even predictive maintenance
•    Welding engineers can use data to more quickly set up new test welding processes for new applications

Additionally, ESAB's approach to digital transformation parallels its collaborative approach to customer preferences: the broad portfolio provides value even where competitive equipment may be preferred. "Welding may be a science, but its practitioners consider good welding an art and become very attached to the tools they know," said Enlund. "With the Universal Connector interface and Weld Quality (HKS), WeldCloud™ can easily work with other brands and deliver most of the same value."

Partnering to deliver customer value

As ESAB moves forward, it plans to continue focusing on partnerships that deliver customer value, having seen success from close relationships with Microsoft and PTC. “We are taking a page out of the Microsoft playbook,” said Cahalane. “The world is a different place. It’s moving fast. Competitors can become collaborators, and collaboration is key.” The company will continue to expand its Data Driven Advantage initiative across its large portfolio and is now piloting the same common technology platform in its own operations. They expect that the power of standard tools, consistent data models, and modern analytics will increase their own productivity, improve quality, and potentially enable new insights by closing the loop between products in the field and the operations that make them.

Simplify Always On availability group deployments on Azure VM with SQL VM CLI


Always On availability groups (AG) provide high availability and disaster recovery capabilities to your SQL Server database, whether on-premises, in the cloud, or a combination of both. Manually deploying an availability group for SQL Server on Azure Virtual Machines (VM) is a complex process that requires understanding of Azure’s infrastructure, but new enhancements have greatly simplified the process.

We recently published a new method to automate Always On AG deployments on Azure VM with SQL Virtual Machine Resource Provider via Azure quickstart templates. Today, we are proud to share that we have further simplified this automation with Azure SQL VM CLI, the management API for SQL VM resource provider.

Deploying an Always On AG configuration for SQL Server on Azure VMs is now possible with the following simple steps.

Define Windows Failover Cluster metadata

az sql vm group manages the metadata about the Windows Failover Cluster service that will host the Always On AG. The cluster metadata includes the Active Directory (AD) domain, the cluster accounts, the storage account to be used as the cloud witness, and the SQL Server version. Use az sql vm group create to define the Windows Failover Cluster metadata so that when the first VM is added, the cluster is created as defined. An example command is provided below.

az sql vm group create -n <cluster name> -l <region ex:eastus> -g <resource group name> --image-offer <SQL2016-WS2016 or SQL2017-WS2016> --image-sku Enterprise --domain-fqdn <FQDN ex: domain.com> --operator-acc <domain account ex: testop@domain.com> --bootstrap-acc <domain account ex:bootacc@domain.com> --service-acc <service account ex:testservice@domain.com> --sa-key '<PublicKey>' --storage-account '<ex:https://cloudwitness.blob.core.windows.net/>'

Only AD domain-joined Windows Failover Cluster definitions are supported. The FQDN is a required property, and all AG replicas should already be joined to the AD domain before they are added to the cluster.

You can use any existing storage account as a cloud witness in the cluster, or you can create a new storage account. An example Azure CLI command to create the storage account is below:

az storage account create -n <name> -g <resource group name> -l <region ex:eastus> --sku Standard_LRS --kind StorageV2 --access-tier Hot --https-only true

Add SQL VMs to the Cluster – Adding the first VM will create the cluster

az sql vm add-to-group manages adding AG replicas to the Windows Failover Cluster defined above. The cluster is created when the first VM is added to the group. This command automates installing the cluster role on the VM and creating the cluster with the given name. Subsequent add-to-group calls add the remaining replicas to the cluster.

az sql vm add-to-group -n <VM Name> -g <Resource Group Name> --sqlvm-group <cluster name> -b <bootstrap account password> -p <operator account password> -s <service account password>

You can deploy a new SQL VM instance from the Enterprise SQL Server 2016 or 2017 images on Azure Marketplace to use as an AG replica. If you deploy the SQL VM from the Azure portal, it will have the SQL IaaS extension installed and be registered with the SQL VM RP by default. If you deploy with Azure PowerShell, the Azure CLI, or from a non-SQL Server image, you will need to manually follow these steps:

  1. Install the SQL IaaS extension on the virtual machine.
  2. Create a SqlVirtualMachine resource associated with the VM with az sql vm create. An example of this is below:
az sql vm create -n <VM Name> -g <Resource Group Name> -l <region ex:eastus>

You can add an existing SQL VM to the cluster as an AG replica once these prerequisites are met.

Create an Availability Group through SSMS

Once all SQL VMs are added to the cluster, you can log in to one of them and set up the availability group through the SSMS New Availability Group Wizard. At this point, creating the availability group is very simple because all replicas are already added to the cluster.

Create an Availability Group Listener

The last step in the Always On AG configuration is creating an AG listener to enable automated connection routing after a failover. You can create an AG listener with the az sql vm group ag-listener create command, as shown below.

az sql vm group ag-listener create -n <listener name> -g <resource group name> --ag-name <availability group name> --group-name <cluster name> --ip-address <ag listener IP address> --load-balancer {lbname} --probe-port <Load Balancer probe port, default 59999> --subnet {subnet resource id} --sqlvms <names of SQL VMs hosting AG replicas ex: sqlvm1 sqlvm2>

An AG listener requires an Internal Load Balancer (ILB) on Azure VMs. If your SQL VMs are in the same availability set, you can use a Basic ILB; otherwise you need a Standard ILB. You can create the ILB via the Azure CLI as shown in the example below.

az network lb create --name <ILB name> -g <resource group name> --sku Standard --vnet-name <VNet Name> --subnet <subnet name>

That is all it takes to deploy SQL Server on Azure Virtual Machines in an Always On AG configuration. Start taking advantage of these expanded capabilities enabled by the SQL VM resource provider and the Azure SQL VM CLI today. If you have a question or would like to make a suggestion, you can contact us through UserVoice. We look forward to hearing from you!

A better multi-monitor experience with Visual Studio 2019


Visual Studio 2019 now supports per-monitor DPI awareness (PMA) across the IDE. PMA support means the IDE and more importantly, the code you work on appears crisp in any monitor display scale factor and DPI configuration, including across multiple monitors.

Visual Studio 2019 (left) with system scaling vs Visual Studio 2019 (right) with the PMA option enabled.

If you have used Visual Studio across monitors with different scale factors or remoted into a machine with a different configuration than the host device, you might have noticed Visual Studio’s fonts and icons can become blurry and in some cases, even render content incorrectly. That’s because versions prior to Visual Studio 2019 were set to render as a system scaled application, rather than a per-monitor DPI aware application (PMA).

System scaled applications render accurately on the primary display as well as others in the same configuration but have visual regressions such as blurry fonts and images when rendering on displays with different configurations. When working for extended periods of time, these visual regressions can be a distraction or even a physical strain.

Visual Studio 2019 Preview 1 included the core platform support for per-monitor DPI awareness and Preview 2 includes additional fixes for usability issues around scaling, positioning and bounding (e.g. content renders within the bounds of tool windows). Preview 2 also adds several more popular tool windows that now correctly handle per-monitor DPI awareness.

How to enable PMA for Visual Studio 2019

The easiest way to try the new PMA functionality is on Visual Studio 2019 Preview 2. You’ll need to have the Windows 10 April 2018 Update or a newer build installed along with the latest version of .NET Framework 4.8. If you’re still running Preview 1 then you also need to enable “Optimize rendering for screens with different pixel densities” in the Preview Features node of the Tools -> Options dialog.

There are many features where you'll start to see Visual Studio render clear fonts and crisp images. Here are a few of the most used UI areas in Visual Studio where you should notice a difference.

  • Core Shell
  • Menus and context menus
  • Most code editors
  • Solution Explorer
  • Team Explorer
  • Toolbox
  • Breakpoints
  • Watch
  • Locals
  • Autos
  • Call Stack

Visual Studio 2019 Preview 2 also fixes some of the usability issues affecting UI positioning, scaling and content bounding that were discovered in Preview 1.

Our goal is to have per-monitor awareness working across the most used features by the time we ship Visual Studio 2019. In future updates, we’ll continue enabling PMA across more areas and look forward to your feedback.

Tell us what you think!

We thank you for your ongoing feedback and encourage you to install the latest Visual Studio 2019 preview, enable the PMA functionality, and tell us about your experiences through the Developer Community portal. Please upvote PMA-related asks or create new ones whenever you feel a specific component (tool window, dialog, etc.) or issue has not been reported.

Reporting your experience alongside your display configurations, the PMA feature state (on/off), and, for bonus points, any screenshot or video showing the affected areas will help us resolve issues faster and account for as many use cases as possible.

Ruben Rios, Program Manager, Visual Studio
@rub8n

Ruben is a Program Manager on the Visual Studio IDE platform team. During his time at Microsoft, he’s helped build tools and services for web & mobile devs in both Visual Studio and the Microsoft Edge F12 dev tools. Before joining Microsoft, he was a professional web developer and has always been passionate about UX.


Announcing an easier way to use latest certificates from Key Vault


Posting on behalf of Prashanth Yerramilli

When we launched Azure Key Vault a few years ago, it solved a major problem: storing sensitive and/or secret information in code or config files in plain text creates multiple problems, including security exposure. Users stored their secrets in a safe store like Key Vault and used a URI to fetch the secret material. This service has been wildly popular and has become a standard for cloud applications. It is used by everyone from fledgling startups to Fortune 500 companies the world over.

Developers use Key Vault to store their ad hoc secrets, certificates, and keys used for encryption. To follow best security practices, they create secrets that are short-lived. An example of a typical flow could be:

  • Step 1: Developer creates a certificate in Key Vault
  • Step 2: Developer sets the lifetime of the secret to be 30 days. In other words, the developer asks Key Vault to re-create the certificate every 30 days. The developer also chooses to receive an email when a certificate is about to expire
  • Step 3: Developer writes a polling service to check if the certificate has indeed expired

In the above scenario there are a few challenges for the customer. They would have to write a polling service that constantly checks whether the certificate has expired and, if so, waits for the new certificate and then binds it in Windows Certificate Manager.
What if the developer didn't have to poll, and didn't have to bind the new certificate in Windows Certificate Manager? To solve this exact problem, we built the Key Vault Virtual Machine Extension.

Azure virtual machine (VM) extensions are small applications that provide post-deployment configuration and automation tasks on Azure VMs. For example, if a virtual machine requires software installation, anti-virus protection, or to run a script inside of it, a VM extension can be used. Azure VM extensions can be run with the Azure CLI, PowerShell, Azure Resource Manager templates, and the Azure portal. Extensions can be bundled with a new VM deployment, or run against any existing system.
To learn more about VM Extensions please click here

The Key Vault VM extension does just that, as explained in the steps below:

  • Step 1: Create a Key Vault and create an Azure Windows Virtual Machine
  • Step 2: Install the Key Vault VM Extension on the VM
  • Step 3: Configure Key Vault VM Extension to monitor a specific vault by specifying how often it should fetch the certificate

By completing the above steps, the latest certificate is bound correctly in Windows Certificate Manager. This feature enables auto-rotation of SSL certificates without requiring a re-deployment or manual re-binding.

In the lifecycle of secrets management, fetching the latest version of a secret (for the purposes of this article, a certificate) is just as important as storing it securely. To solve this problem on an Azure Virtual Machine, we've created a VM extension for Windows; a Linux version is coming soon.
Once installed, the Key Vault Virtual Machine extension fetches the latest version of the certificate at the specified interval and automatically binds it in the certificate store on Windows. As you can see, this feature enables auto-rotation of SSL certificates without requiring a re-deployment or manual re-binding.

Also, before we go through the tutorial, we need to understand a concept called Managed Identities (MI).
Your code needs credentials to authenticate to cloud services, but you want to limit the visibility of those credentials as much as possible. Ideally, they never appear on a developer's workstation or get checked in to source control. Azure Key Vault can store credentials securely so they aren't in your code, but to retrieve them you need to authenticate to Key Vault, and to authenticate to Key Vault you need a credential: a classic bootstrap problem. Through the magic of Azure and Azure AD, MI provides a "bootstrap identity" that makes it much simpler to get things started.

Here’s how it works: When you enable MI for an Azure resource such as a virtual machine, Azure creates a Service Principal (an identity) for that resource in Azure AD, and injects the credentials (of that identity) into the resource (in this case a virtual machine).

  1. Your code calls a local MI endpoint to get an access token
  2. MI uses the locally injected credentials to get an access token from Azure AD
  3. Your code uses this access token to authenticate to an Azure service

Managed Identities
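To make this flow concrete, here is a minimal C# sketch that requests a Key Vault access token from the VM's local managed identity endpoint (the Azure Instance Metadata Service). The endpoint URI, api-version, and Metadata header are the standard IMDS values; the class and variable names are illustrative, and in production you would typically use a helper library such as Microsoft.Azure.Services.AppAuthentication rather than calling the endpoint by hand.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class ManagedIdentityTokenSample
{
    // Local IMDS endpoint available inside Azure VMs; no credentials live in code or config.
    private const string TokenEndpoint =
        "http://169.254.169.254/metadata/identity/oauth2/token" +
        "?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net";

    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // IMDS requires the Metadata header so that simple forwarded requests are rejected.
            client.DefaultRequestHeaders.Add("Metadata", "true");

            // Steps 1 and 2: ask the local MI endpoint for a token; Azure AD issues it
            // based on the credentials injected into the VM by Managed Identities.
            string tokenResponseJson = await client.GetStringAsync(TokenEndpoint);

            // Step 3: extract access_token from the JSON and send it as a Bearer token
            // on calls to Key Vault's REST API (JSON parsing omitted here for brevity).
            Console.WriteLine(tokenResponseJson);
        }
    }
}

Because the token comes from the locally injected identity, rotating or revoking access is handled in Azure AD and Key Vault access policies, not in application configuration.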

Within Managed Identities, there are two types:

  1. System Assigned managed identity is enabled directly on an Azure service instance. When the identity is enabled, Azure creates an identity for the instance in the Azure AD tenant that’s trusted by the subscription. The lifecycle of the identity is managed by Azure and is tied to the Azure service instance.
  2. User Assigned managed identity is created as a standalone Azure resource. Users first create an identity and then assign that identity to one or more Azure resources.

In this tutorial I will demonstrate how to create an Azure Virtual Machine with an ARM template that also deploys the Key Vault VM extension on the VM.

Prerequisites

Step 1

After the prerequisites are complete, create a System Assigned identity by following this tutorial.

Step 2

Grant the newly created System Assigned identity access to your Key Vault

  • Go to https://portal.azure.com and navigate to your Key Vault
  • Select the Access Policies section and add a new policy by searching for the System Assigned identity
    (Screenshot: Key Vault Access Policies)

Step 3

Create or update a VM with the following ARM template.
You can view the full ARM template here and the ARM parameters file here.

The most minimal settings in the ARM template are shown below:

     {
            "secretsManagementSettings": {
                "pollingIntervalInS": "[parameters('kvvmextPollingInterval')]",
                "observedCertificates": [
                    "<KeyVault URI of a secret to be monitored/retrieved, in versionless format: https://myVaultName.vault.azure.net/secrets/myCertName>",
                    "<more entries here>"
                ]
            }
        }

As you can see, we only specify the observedCertificates parameter and the polling interval in seconds.


Note: Your observedCertificates urls should be of the form:
https://myVaultName.vault.azure.net/secrets/myCertName 

and not:

https://myVaultName.vault.azure.net/certificates/myCertName 

The reason is that the /secrets path returns the full certificate, including the private key, while the /certificates path does not.

By following this tutorial you can create a VM with the template specified above.

The above tutorial assumes that you are storing your certificates in Windows Certificate Manager, so the VM extension pulls down the latest certificates at the specified interval and automatically binds those certificates in your certificate manager.

That’s all folks!

Linux Version: We’re actively working on a VM Extension for Linux and would love to hear any feedback you might have.

We are eager to hear from you about your use cases and how we can evolve the VM Extension to help you. So please reach out to us and add your feature requests to the Azure feedback forum. If you run into issues using the VM extension please reach out to us on StackOverflow.


Prashanth Yerramilli, Senior Program Manager, Azure Key Vault

Prashanth Yerramilli is the Key Vault Program Manager on the Azure Security team. He has over 10 years of software engineering experience and brings to the team a love for creating the ultimate development experience.

Prashanth can be reached at:
-Twitter @yvprashanth1
-GitHub https://github.com/yvprashanth

Create data visualizations like BBC News with the BBC’s R Cookbook


If you're looking for a guide to making publication-ready data visualizations in R, check out the BBC Visual and Data Journalism cookbook for R graphics. Announced in a BBC blog post this week, it provides scripts for making line charts, bar charts, and other visualizations like those below used in the BBC's data journalism.

(Image: example BBC News graphics)

The cookbook is based around the bbplot package (available on GitHub), which has been in use at the BBC since March 2018 for creating graphics in the BBC News graphics style. Rather than drafting graphics in R and then relying on another graphics tool or a design team for publication, R has been a "game changer" for the BBC: it offers more creativity and control while saving time and effort, especially when a chart needs to be reproduced every time the underlying data change. The BBC offers an internal six-week course for its data journalists, and you can learn as well with the free cookbook published by the BBC, linked below.

Github (BBC): BBC Visual and Data Journalism cookbook for R graphics

 

 

Announcing ML.NET 0.10 – Machine Learning for .NET



ML.NET is an open-source and cross-platform machine learning framework (Windows, Linux, macOS) for .NET developers. Using ML.NET, developers can leverage their existing tools and skillsets to develop and infuse custom AI into their applications by creating custom machine learning models.

ML.NET allows you to create and use machine learning models targeting common tasks such as classification, regression, clustering, ranking, recommendations, and anomaly detection. It also supports the broader open source ecosystem by providing integration with popular deep-learning frameworks like TensorFlow and interoperability through ONNX. Some common use cases of ML.NET are scenarios like sentiment analysis, recommendations, image classification, and sales forecasting. Please see our samples for more scenarios.

Today we're announcing the release of ML.NET 0.10 (ML.NET 0.1 was released at //Build 2018). Note that ML.NET follows a semantic versioning pattern, so this preview version is 0.10. There will be additional versions such as 0.11 and 0.12 before we release v1.0.

This release focuses on the overall stability of the framework, continuing to refine the API and increase test coverage. As a strategic milestone, we have moved the IDataView components into a new, separate assembly under the Microsoft.Data.DataView namespace to favor interoperability in the future.

The main highlights for this blog post are described below in further detail:

IDataView as a shared type across libraries in the .NET ecosystem

The IDataView component provides very efficient, compositional processing of tabular data (columns and rows), made especially for machine learning and advanced analytics applications. It is designed to efficiently handle high-dimensional data and large data sets. It is also suitable for single-node processing of data partitions belonging to larger distributed data sets.

For further info on IDataView, read the IDataView design principles.

What’s new in v0.10 for IDataView

In ML.NET 0.10 we have separated the IDataView component into its own assembly and NuGet package. This is a very important step towards interoperability with other APIs and frameworks.

Why segregate IDataView from the rest of the ML.NET framework?

This is a very important milestone that will help the ecosystem's interoperability between multiple frameworks and libraries, whether from Microsoft or third parties. By separating IDataView, different libraries will be able to reference it and use it from their APIs, allowing users to pass large volumes of data between two independent libraries.

For example, from ML.NET you can of course consume and produce IDataView instances. But what if you need to integrate with a different framework by creating an IDataView from another API such as any “Data Preparation framework” library? If those frameworks can simply reference a single NuGet package with just the IDataView, then you can directly pass data into ML.NET from those frameworks without having to copy the data into a format that ML.NET consumes. Also, the additional framework wouldn’t depend on the whole ML.NET framework but just reference a very clean package limited to the IDataView.

The image below is an aspirational approach when using IDataView across frameworks in the ecosystem:

(Diagram: IDataView shared as an exchange format across libraries in the .NET ecosystem)

Another good example would be any plotting/charting library in the .NET ecosystem that could consume data using IDataView. You could take data that was produced by ML.NET and feed it directly into the plotting library without that library having a direct reference to the whole ML.NET framework. There would be no need to copy, or change the shape of the data at all. And there is no need for this plotting library to know anything about ML.NET.

Basically, IDataView can serve as an exchange data format that allows producers and consumers to pass large amounts of data in a standardized way.

For additional info check the PR #2220
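To make the exchange-format idea concrete, here is a minimal C# sketch that turns an in-memory collection into an IDataView and hands it to a consumer that depends only on the IDataView abstraction. The HousingRow type and the PrintSchema consumer are illustrative, and the factory method is named per the later stable API (LoadFromEnumerable); the 0.10 preview exposed an equivalent reader under a slightly different name.

using System;
using System.Collections.Generic;
using Microsoft.ML;

public class HousingRow
{
    public float Size { get; set; }
    public float Price { get; set; }
}

public static class DataViewExchangeSample
{
    // A consumer that only needs the IDataView abstraction, not any trainer or transform.
    public static void PrintSchema(IDataView data)
    {
        foreach (var column in data.Schema)
            Console.WriteLine($"{column.Name}: {column.Type}");
    }

    public static void Main()
    {
        var mlContext = new MLContext();
        var rows = new List<HousingRow>
        {
            new HousingRow { Size = 1.1f, Price = 100f },
            new HousingRow { Size = 1.9f, Price = 180f }
        };

        // Produce an IDataView over plain .NET objects without copying the data.
        IDataView dataView = mlContext.Data.LoadFromEnumerable(rows);

        // Any library that understands IDataView can now consume it directly.
        PrintSchema(dataView);
    }
}

Because PrintSchema depends only on the IDataView interface, the same method could live in a charting or data-preparation library that never references ML.NET's trainers.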

Support for multiple ‘feature columns’ in recommendations (FFM based)

In previous ML.NET releases, when using the Field-Aware Factorization Machine (FFM) trainer (training algorithm), you could only provide a single feature column, as in this sample app.

In the 0.10 release we've added support for multiple feature columns in your training dataset when using an FFM trainer, by allowing you to specify the additional column names in the trainer's Options parameter, as shown in the following code snippet:

var ffmArgs = new FieldAwareFactorizationMachineTrainer.Options();

// Create the multiple field names.
ffmArgs.FeatureColumn = nameof(MyObservationClass.MyField1); // First field.
ffmArgs.ExtraFeatureColumns = new[]{ nameof(MyObservationClass.MyField2), nameof(MyObservationClass.MyField3) }; // Additional fields.

var pipeline = mlContext.BinaryClassification.Trainers.FieldAwareFactorizationMachine(ffmArgs);

var model = pipeline.Fit(dataView);

You can see additional code example details in this code

Additional updates in v0.10 timeframe

Support for returning multiple predicted labels

Until ML.NET v0.9, when predicting (for instance, with a multi-class classification model), you could only predict and return a single label. That's an issue for many business scenarios. For instance, in an eCommerce scenario, you might want to automatically classify a product and assign it to multiple product categories instead of just a single category.

However, when predicting, ML.NET internally already had a list of the multiple possible predictions with a score/probability for each in the schema's data, but the API was simply returning a single predicted label rather than the list of possible labels.

Therefore, this improvement allows you to access the schema's data so you can get a list of the predicted labels, which can then be related to their scores/probabilities provided by the float[] Score array in your prediction class, such as in this sample prediction class.

For additional info check this code example
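As a hedged illustration of that pattern, the C# sketch below reads the per-class label names from the Score column's metadata and pairs them with the scores, so the top N predicted labels can be reported instead of only the single best one. It is written against the later stable API surface (CreatePredictionEngine, GetSlotNames); exact member names shifted slightly across the 0.x previews, and the MyInput/MyPrediction types are hypothetical.

using System;
using System.Linq;
using Microsoft.ML;
using Microsoft.ML.Data;

public class MyInput
{
    public string Text { get; set; }
}

public class MyPrediction
{
    [ColumnName("PredictedLabel")]
    public string PredictedLabel { get; set; }

    // One score per class, in the same order as the slot names stored on the Score column.
    public float[] Score { get; set; }
}

public static class TopLabelsSample
{
    public static void PrintTopLabels(MLContext mlContext, ITransformer model, MyInput input, int topN = 3)
    {
        var engine = mlContext.Model.CreatePredictionEngine<MyInput, MyPrediction>(model);
        MyPrediction prediction = engine.Predict(input);

        // Read the per-class label names out of the Score column's metadata.
        VBuffer<ReadOnlyMemory<char>> slotNames = default;
        engine.OutputSchema["Score"].GetSlotNames(ref slotNames);
        string[] labels = slotNames.DenseValues().Select(v => v.ToString()).ToArray();

        // Pair each label with its score and report the top-N predictions.
        var top = labels
            .Zip(prediction.Score, (label, score) => (label, score))
            .OrderByDescending(pair => pair.score)
            .Take(topN);

        foreach (var (label, score) in top)
            Console.WriteLine($"{label}: {score:P1}");
    }
}

Pairing the Score array with the slot names is what turns the raw float array into a ranked list of human-readable labels.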

Minor updates in 0.10

  • Introducing the Microsoft.ML.Recommender NuGet name instead of Microsoft.ML.MatrixFactorization: Microsoft.ML.Recommender is a better name for the NuGet package because it is based on the scenario (recommendations) instead of the trainer's name (MatrixFactorization).

  • Added support for text and sparse input in TensorFlow: Specifically, this adds support for loading a map from a file through a dataview by using ValueMapperTransformer. This enables additional scenarios, like text/NLP scenarios, in TensorFlowTransform where the model's expected input is a vector of integers.

  • Added support for unfrozen TensorFlow models in GetModelSchema: For a code example that loads an unfrozen TensorFlow model, check it out here.

Breaking changes in ML.NET 0.10

For your convenience, if you are moving your code from ML.NET v0.9 to v0.10, you can check out the breaking changes list that impacted our samples.

Instrumented code coverage tools as part of the ML.NET CI systems

We have also instrumented code coverage tools (using https://codecov.io/) as part of our CI systems and will continue to push for stability and quality in the code.

You can check it out here; the link is also on the home page of the ML.NET repo:

(Screenshot: code coverage badge on the ML.NET repo home page)

Once you click on that link, you’ll see the current code coverage for ML.NET:

(Screenshot: current ML.NET code coverage report on codecov.io)

Explore the community samples and share yours!!

As part of the ML.NET Samples repo we also have a special Community Samples page pointing to multiple samples provided by the community. These samples are not maintained by Microsoft but are very interesting and cover additional scenarios not covered by us.

Here's a screenshot of the current community samples:

(Screenshot: ML.NET Community Samples page)

There are pretty cool samples like the following:

‘Photo-Search’ WPF app running a TensorFlow model exported to ONNX format

(Screenshot: Photo-Search WPF sample app)

UWP app using ML.NET

(Screenshot: UWP sample app using ML.NET)

Other very interesting samples are:

Share your sample with the ML.NET community!


We encourage you to share your ML.NET demos and samples with the community by simply submitting a brief description and a URL pointing to your GitHub repo or blog post to this repo issue: "Request for your samples!"

We’ll do the rest and publish it at the ML.NET Community Samples page!

Planning to go to production?


If you are using ML.NET in your app and looking to go into production, you can talk to an engineer on the ML.NET team to:

  1. Get help implementing ML.NET successfully in your application.
  2. Demo your app and potentially have it featured on the .NET Blog, dot.net site, or other Microsoft channel.

Fill out this form and someone from the ML.NET team will contact you.

Get started!


If you haven't already, get started with ML.NET here.

Next, to go further, explore some other resources:

We would appreciate your feedback: please file issues with any suggestions or enhancements in the ML.NET GitHub repo to help us shape ML.NET and make .NET a great platform of choice for machine learning.

Thanks and happy coding with ML.NET!

The ML.NET Team.

This blog was authored by Cesar de la Torre and Eric Erhardt, with additional contributions from the ML.NET team.

Microsoft Q# Coding Contest – Winter 2019


Are you new to quantum computing and want to improve your skills? Have you done quantum programming before and are looking for a new challenge? Microsoft's Quantum team is excited to invite you to the second Microsoft Q# Coding Contest, organized in collaboration with Codeforces.com.

The contest will be held March 1 through March 4, 2019. It will offer the participants a selection of quantum programming problems of varying difficulty. In each problem, you’ll write Q# code to implement the described transformation on the qubits or to perform a more challenging task. The top 50 participants will win a Microsoft Quantum T-shirt.

This contest is the second one in the series started by the contest held in July 2018. The first contest offered problems on introductory topics in quantum computing: superposition, measurement, quantum oracles and simple algorithms. The second contest will take some of these topics to the next level and introduce some new ones.

For those eager to get a head start in the competition, the warmup round will be held February 22-25, 2019. It will feature a set of simpler problems and focus on getting the participants familiar with the contest environment, the submission system, and the problem format. The warmup round is a great introduction, both for those new to Q# and for those looking to refresh their skills.

Another great way to prepare for the contest is to work your way through the Quantum Katas. They offer problems on a variety of topics in quantum programming, many of them similar to those used in the first contest. Most importantly, the katas allow you to test and debug your solutions locally, giving you immediate feedback on your code.

Q# can be used with Visual Studio, Visual Studio Code or command line on Windows, macOS or Linux, providing an easy way to start with quantum programming. Any of these platforms can be used in the contest.

We hope to see you at the second global Microsoft Q# Coding Contest!

Mariia Mykhailova, Senior Software Engineer, Quantum
@tcnickolas
Mariia Mykhailova is a software engineer at the Quantum Architectures and Computation group at Microsoft. She focuses on developer outreach and education work for the Microsoft Quantum Development Kit. In her spare time she writes problems for programming competitions and creates puzzles.

Top Stories from the Microsoft DevOps Community – 2019.02.08


Happy Friday! I hope you've had a great week full of finding bugs, improving performance, and keeping your services online. Now that you're cruising into the weekend, it's a good time to take a moment and read up on the state of DevOps. Here are some great articles (and a podcast) that I found this week.

.NET Core Opinion #8 – How to Use Azure DevOps Pipelines
K. Scott Allen has been posting some great tips and tricks around .NET Core. In this post, he shows how to use Azure Pipelines for a "configuration as code" setup for your CI/CD, building your project with a YAML definition.

Azure Pipelines! qué es y cómo te puede ayudar
So what is Azure Pipelines and how can it help you? Manuel Valenzuela has a great introduction. (Spanish language)

Using Azure DevOps Artifacts NuGet Feeds from Azure DevOps Pipeline Builds
The Azure DevOps team packages its components into NuGet packages which – of course – we don't publish publicly. Instead, we use a private NuGet repository. Travis Illig shows you how you can use Azure Artifacts with private NuGet packages from Azure Pipelines builds.

Azure DevOps — find your activity stream
Sahil Malik has a quick tip for users coming to Azure DevOps from Jira and who are used to the “activity stream”; he tells you how to find that in Azure Boards.

Migrate from Jenkins to Azure Pipelines
The Azure DevOps team just added some guides for people who are interested in migrating from Jenkins to Azure Pipelines. It goes great with our guidance for migrating from Travis; either way is a good first step to a migration to a modern CI pipeline.

Radio TFS: Azure DevOps with Azure Greg
Another great episode of the Radio TFS podcast. In this episode, your host Greg Duncan is joined by Gregor Suttie (aka Azure Greg) where they talk about the change to Azure DevOps, Azure exams, marketplace extensions, and much more.

As always, if you’ve written an article about Azure DevOps or find some great content about DevOps on Azure then let me know! I’m @ethomson on Twitter.
