This blog was written in collaboration with Andrew Stevenson, CTO at Lenses.
Apache Kafka is one of the most popular open source streaming platforms today. However, deploying and running Kafka remains a challenge for most. Azure HDInsight addresses this challenge by providing:
Ease-of-use: Quickly deploy Kafka clusters in the cloud and integrate simply with other Azure services.
Higher scale and lower total-cost-of-operations (TCO): With managed disks, compute and storage are separated, enabling you to store hundreds of terabytes on a cluster.
Enhanced security: Bring your own key (BYOK) encryption, custom virtual networks, and topic level security with Apache Ranger.
But that’s not all – you can now successfully manage your streaming data operations, from visibility to monitoring, with Lenses, an overlay platform now generally available as part of the Azure HDInsight application ecosystem, right from within the Azure portal!
With Lenses, customers can now:
Easily look inside Kafka topics
Inspect and modify streaming data using SQL
Visualize application landscapes
Look inside Kafka topics
A typical production Kafka cluster has thousands of topics. Imagine you want to get a high level view on all of these topics. You may want to understand the configuration of the various topics, such as the replication or partition distribution. Or you may want to look deeper inside a specific topic, investigating the message throughput and the leader broker.
While many of these insights can be obtained through the Kafka CLI, Lenses greatly simplifies the experience by unifying key insights for topics and brokers in a simple, intuitive visual interface. With Lenses, inspecting your Kafka cluster is effortless.
Inspect and modify streaming data using SQL
What if you want to inspect the data within a Kafka topic and view the messages sent within a certain time frame? Or what if you want to process a subset of that stream and write it back to another Kafka topic? You can do both with SQL queries and Processors within the Lenses UI. You can write SQL queries to validate your streaming data and unblock your client organizations faster.
SQL Processors can be deployed and monitored to perform real-time transforms and analytics, supporting all the features you would expect in SQL, like joins and aggregations. You can also configure Lenses to scale out processing with Azure Kubernetes Service (AKS).
Visualize application landscapes
At the end of the day, you’re trying to create a solution that will create business impact. That solution will be composed of various microservices, data producers, and analytical engines. Lenses gives you easy insights into your application landscape, describing the running processes and the lineage of your data platform.
In the Topology view, running applications are added dynamically, recovered at startup, and shown together with their topics. For creating end-to-end solutions, Lenses also provides an easy way to deploy connectors from the open source Stream Reactor project, which contains a large collection of Kafka Connect connectors.
Check out the following resources to get started with Lenses on Azure HDInsight:
We continue to expand the Azure Marketplace ecosystem. For this volume, 163 new offers successfully met the onboarding criteria and went live. See details of the new offers below:
Applications
Accela Civic Platform and Civic Applications: Accela's fast-to-implement civic applications and robust and extensible solutions platform help agencies respond to the rapid modernization of technology with SaaS solutions that offer high degrees of security, flexibility, and usability.
Adrenalin HCM: The human resources function is the quintessential force that enables an organization's strongest asset, its people, to perform better and benefit both themselves and the company. Reimagine your HR function with Adrenalin HCM.
Advanced Threat Protection for OneDrive: BitDam helps enterprises take full advantage of all OneDrive has to offer while delivering advanced threat protection against content-borne attacks.
AGR - Advanced Demand Planning: This modular AGR solution allows you to make more consistent planning decisions and more accurate buying decisions and helps ensure you have the right product in the right place at the right time.
agroNET - Digital Farming Management Platform: agroNET is a turnkey digital farming solution that enables smart agriculture service providers and system integrators to rapidly deploy the service tailored to the needs of farmers.
AIMSCO Azure MES/QM Platform for SME Manufacturers: With embedded navigation dashboards, displays, alerts, APIs, and BI interfaces, AIMSCO Azure MES/QM Platform users from the shop floor to the boardroom have real-time access to critical decision-making tools.
AIRA Robotics as a Service: Transform the installation of new equipment from CAPEX to OPEX as a part of a digital transformation using the AIRA digitalization system for long-term service relationships with suppliers.
Apex Portal: Use Apex Portal for supplier registration, self-service inquiry of invoice and payment status, dynamic discounting and early payments, and automated statement audits.
AppStudio: AppStudio is a suite of offerings for managing apps using a standardized methodology to ensure you are up to date and ready for the next challenge.
ArcBlock ABT Blockchain Node: ABT Blockchain Node is fully decentralized and uses ArcBlock's blockchain development platform to easily build, run, and use DApps and blockchain-ready services.
ArcGIS Enterprise 10.7: Manage, map, analyze, and share geographic information systems (GIS) data with ArcGIS Enterprise, the complete geospatial system that powers your data-driven decisions.
Area 1 Horizon Anti-Phishing Service for Office 365: Area 1 Security closes the phishing gap with a preemptive, comprehensive, and accountable anti-phishing service that seamlessly integrates with and fortifies Microsoft Office 365 security defenses.
Arquivar-GED: ArqGED is document management software that allows users to dynamically solve problems with location and traceability of information in any format (paper, digital, microfilm, etc.).
Aruba Virtual Gateway (SD-WAN): Aruba's software-defined WAN (SD-WAN) technology simplifies wide area network operations and improves application QoS to lower your total cost of ownership.
Arundo Analytics: Arundo delivers enterprise-scale machine learning and advanced analytics applications to improve operations in heavy asset industries.
Assurity Suite: The Assurity Suite platform provides assurance and control over your organization's documents, communications, investigations, compliance, information, and processes.
Atilekt.NET: Website-building platform Atilekt.NET is a friendly, flexible, and fast-growing content management system based on ASP.NET.
Axians myOperations Patch Management: Axians myOperations Server Patch Management integrates a complete management solution to simplify the rollout, monitoring, and reporting of Windows updates.
Axioma Risk: Axioma Risk is an enterprise-wide risk-management system that enables clients to obtain timely, consistent, and comparable views of risk across an entire organization and all asset classes.
Azure Analytics System Solution: BrainPad's Azure Analytics System Solution is designed for enterprises using clouds for the first time as well as companies considering sophisticated usage. This application is available only in Japanese.
Beam Communications: Communications are a fundamental element in institutional development, and Beam Communications boosts internal and external communications. This application is available only in Spanish.
Betty Blocks Platform: From mobile apps to customer portals to back-office management and everything in between, the Betty Blocks platform supports every app size and complexity.
BI-Clinical: BI-Clinical is CitiusTech’s ONC- and NCQA-certified BI and analytics platform designed to address the healthcare organization’s most critical quality reporting and decision support needs.
Bizagi Digital Business Platform: The Bizagi platform helps enterprises embrace change by improving operational efficiencies, time to market, and compliance.
Bluefish Editor on Windows Server 2019: The Bluefish software editor supports a plethora of programming languages including HTML, XHTML, CSS, XML, PHP, C, C++, JavaScript, Java, Google Go, Vala, Ada, D, SQL, Perl, ColdFusion, JSP, Python, Ruby, and Shell.
BotCore - Enterprise Chatbot Builder: BotCore is an accelerator that enables organizations to build customized conversational bots powered by artificial intelligence. It is fully deployable to Microsoft Azure and leverages many of the features available in it.
Brackets: With focused visual tools and preprocessor support, Brackets is a modern text editor that makes it easy to design in the browser. It's crafted for web designers and front-end developers.
Brackets on Windows Server 2019: With focused visual tools and preprocessor support, Brackets is a modern text editor that makes it easy to design in the browser. It's crafted for web designers and front-end developers.
bugong: The bugong platform combines leading algorithm technology with intelligent manufacturing management. This application is available only in Chinese.
Busit Application Enablement Platform: Busit Application Enablement Platform (AEP) enables fast and efficient handling of all your devices and services, regardless of the brand, manufacturer, or communication protocol.
ByCAV: ByCAV provides biometric identity validation through non-traditional channels for companies in diverse industries that require identity verification. This application is available in Spanish only in Colombia.
Camel Straw: Camel Straw is a cloud-based load testing platform that helps teams load test and analyze and improve the way their applications scale.
Celo: Celo connects healthcare professionals. From big hospitals to small clinics, Celo helps healthcare professionals communicate better.
Cirkled In - College Recruitment Platform: Cirkled In is a revolutionary, award-winning recruitment platform that helps colleges match with best-fit high school students based on students’ holistic portfolio.
Cirkled In - Student Profile & Portfolio Platform: Cirkled In is a secure, award-winning electronic portfolio platform for students designed to compile students’ achievements in seven categories from academics to sports to volunteering and more.
Cleafy Fraud Manager for Azure: Cleafy combines deterministic malware detection with passive behavioral and transactional risk analysis to protect online services against targeted attacks from compromised endpoints without affecting your users and business.
Cloud Desktop: Cloud Desktop on Microsoft Azure offers continuity and integration with the tools and applications that you already use.
Cloud iQ - Cloud Management Portal: Crayon Cloud-iQ is a self-service platform that enables you to manage cloud products (Azure, Office 365, etc.), services, and economics across multiple vendors through a single pane portal view.
Cloudneeti - Continuous Assurance SaaS: Cloudneeti SaaS enables instant visibility into security, compliance, and data privacy posture and enforces industry standards through continuous and integrated assurance aligned with the cloud-native operating model.
Collaboro - Digital Asset Management: Collaboro partners with brands, institutions, government, and advertising agencies to solve their specific digital asset management needs in a fragmented marketing and media space.
Connected Drone: Targeting power and utilities, eSmart Systems Connected Drone software utilizes deep learning to dramatically reduce utility maintenance costs and failure rates and extend asset life.
CyberVadis: By pooling and sharing analyst-validated cybersecurity audits, CyberVadis allows you to scale up your third-party risk assessment program while controlling your costs.
Data Quality Management Platform: BaseCap Analytics’ Data Quality Management Platform helps you make better business decisions by measurably increasing the quality of your greatest asset: data.
DatabeatOMNI: DatabeatOMNI provides you with everything you need to display great content, on as many screens as you want to – without complex interfaces, specialist training, or additional procurement costs.
dataDiver: dataDiver is an extended analytics tool for gaining insights into research design that is neither traditional BI nor BA. This application is available only in Japanese.
dataFerry: dataFerry is a data preparation tool that allows you to easily process data from various sources into the desired form. This application is available only in Japanese.
Dataprius Cloud: Dataprius offers a different way to work with files in the cloud, allowing you to work with company files without synchronizing, without conflicts, and with multiple users connected at the same time.
Denodo Platform 7.0 14-day Free Trial (BYOL): Denodo integrates all of your Azure data sources and your SaaS applications to deliver a standards-based data gateway, making it quick and easy for users of all skill levels to access and use your cloud-hosted data.
Descartes MacroPoint: Descartes MacroPoint consolidates logistics tracking data from carriers into a single integrated platform to meet two growing challenges: real-time freight visibility and automated capacity matching.
Digital Asset Management (DAM) Managed Application: Digital Asset Management delivers a secured and centralized repository to manage videos. It offers capabilities for advanced embed, review, approval, publishing, and distribution of videos.
Digital Fingerprints: Digital Fingerprints is a continuous authentication system based on behavioral biometrics.
DM REVOLVE - Dynamics Data Migration: DM REVOLVE is a dedicated Azure-based Dynamics end-to-end data migration solution that incorporates "Dyn-O-Matic," our specialized Dynamics automated load adaptor.
Docker Community Edition Ubuntu Xenial: Deploy Docker Community Edition with Ubuntu on Azure with this community-supported, DIY version of Docker on Ubuntu.
Dom Rock AI for Business Platform: The Dom Rock AI for business platform empowers people to make better and faster decisions informed by data. This application is available only in Portuguese.
Done.pro: Done.pro enables Uber-for-X cloud platforms, customized and tuned for your business, to provide customers with exceptional service.
EDGE: The Edge system allows seamless operations across the UK – in both the established Scottish market and the new English market.
eJustice: The eJustice solution provides information and communication technology enablement for courts.
ekoNET - Air Quality Monitoring: ekoNET combines portable devices and cloud-based functionality to enable granular air quality monitoring indoors and outdoors.
Element AssetHub: AssetHub is a data hub connecting time series, IT, and OT to manage operational asset models.
Equinix Cloud Exchange Fabric: This software-defined interconnection solution allows you to directly, securely, and dynamically connect distributed infrastructure and digital ecosystems to your cloud service providers.
ERP Beam Education: ERP Beam Education efficiently integrates all the processes that are part of managing an educational center. This application is available only in Spanish.
Essatto Data Analytics Platform: Essatto enables more informed decision making by providing timely insights into your financial and business operations in a flexible, cost-effective application.
Event Monitor: Event Monitor is a user-friendly solution meant for security teams that are responsible for safety.
Firewall as a Service: Firewall as a Service delivers a next-generation managed internet gateway from Microsoft Azure including 24/7 support, self-service, and unlimited changes by our security engineers.
GEODI: GEODI helps you focus on your business by letting you share information, documents, notes, and notifications with contacts and stakeholders via mobile app or browser.
GeoServer: Make your spatial information accessible to all with this free, community-supported open source server based on Java for sharing geospatial data.
GeoServer on Windows Server 2019: Make your spatial information accessible to all with this free, community-supported open source server based on Java for sharing geospatial data.
Ghost Helm Chart: Ghost is a modern blog platform that makes publishing beautiful content to all platforms easy and fun. Built on Node.js, it comes with a simple markdown editor with preview, theming, and SEO built in.
Grafana Multi-Tier with Azure Managed DB: Grafana is an open source analytics and monitoring dashboard for over 40 data sources, including Graphite, Elasticsearch, Prometheus, MariaDB/MySQL, PostgreSQL, InfluxDB, OpenTSDB, and more.
HashiCorp Consul Helm Chart: HashiCorp Consul is a tool for discovering and configuring services in your infrastructure.
HPCBOX: HPC Cluster for STAR-CCM+: HPCBOX combines cloud infrastructure, applications, and managed services to bring supercomputer technology to your personal computer.
H-Scale: H-Scale is a modular, configurable, and scalable data integration platform that helps organizations build confidence in their data and accelerate their data strategies.
Integrated Cloud Suite: CitiusTech’s Integrated Cloud Suite is a one-stop solution that enables healthcare organizations to reduce complexity and drive a multi-cloud strategy optimally and cost-effectively.
JasperReports Helm Chart: JasperReports Server is a standalone and embeddable reporting server. It is a central information hub, with reporting and analytics that can be embedded into web and mobile applications.
Jenkins Helm Chart: Jenkins is a leading open source continuous integration and continuous delivery (CI/CD) server that enables the automation of building, testing, and shipping software projects.
Jenkins On Ubuntu Bionic Beaver: Jenkins is a simple, straightforward continuous integration tool that effortlessly distributes work across multiple machines, helping drive builds, tests, and deployments.
Jenkins-Docker CE on Ubuntu Bionic Beaver: This solution takes away the hassles of setting up the installation process of Jenkins and Docker. The ready-made image integrates Jenkins-Docker to make continuous integration jobs smooth, effective, and glitch-free.
Join2ship: Join2ship is a collaborative supply chain platform designed to digitalize your receipts and deliveries.
Kafka Helm Chart: Tested to work on the EKS platform, Apache Kafka is publish-subscribe messaging rethought as a distributed commit log.
Kaleido Enterprise Blockchain SaaS: Kaleido simplifies the process of creating and operating permissioned blockchains with a seamless experience across cloud properties and geographies for all network participants.
Kubeapps Helm Chart: Kubeapps is a web-based application deployment and management tool for Kubernetes clusters.
LOOGUE FAQ: LOOGUE FAQ is an AI virtual agent that creates chatbots that support queries by creating and uploading two columns of questions and answers in Excel. This application is available only in Japanese.
Magento Helm Chart: Magento is a powerful open source e-commerce platform. Its rich feature set includes loyalty programs, product categorization, shopper filtering, promotion rules, and much more.
MariaDB Helm Chart: MariaDB is an open source, community-developed SQL database server that is widely used around the world due to its enterprise features, flexibility, and collaboration with leading tech firms.
Metrics Server Helm Chart: Metrics Server aggregates resource usage data, such as container CPU and memory usage, in a Kubernetes cluster and makes it available via the Metrics API.
MNSpro Cloud Basic: MNSpro Cloud combines the management of your school network with a learning management system, whether you use Windows, iOS, or Android devices.
MongoDB Helm Chart: MongoDB is a scalable, high-performance, open source NoSQL database written in C++.
MySQL 5.6 Secured Ubuntu Container with Antivirus: MySQL is a popular open source relational database management system and one of the most widely used RDBMSs for developing web-based software applications.
MySQL 8.0 Secured Ubuntu Container with Antivirus: MySQL is a popular open source relational database management system and one of the most widely used RDBMSs for developing web-based software applications.
MySQL Helm Chart: MySQL is a fast, reliable, scalable, and easy-to-use open source relational database system. MySQL Server is designed to handle mission-critical, heavy-load production applications.
NATS Helm Chart: NATS is an open source, lightweight, and high-performance messaging system. It is ideal for distributed systems and supports modern cloud architectures and pub-sub, request-reply, and queuing models.
NetApp Cloud Volumes ONTAP: NetApp Cloud Volumes ONTAP, a leading enterprise-grade storage management solution, delivers secure, proven storage management services and supports up to a capacity of 368 TB.
Node.js Helm Chart: Node.js is a runtime environment built on V8 JavaScript engine. Its event-driven, non-blocking I/O model enables the development of fast, scalable, and data-intensive server applications.
Odoo Helm Chart: Odoo is an open source ERP and CRM platform that can connect a wide variety of business operations such as sales, supply chain, finance, and project management.
On-Demand Mobility Services Platform: Deploy this intelligent, on-demand transportation operating system for automotive OEMs that need to run professional mobility services to embrace the new automotive era and manage the decline of vehicle ownership.
OpenCart Helm Chart: OpenCart is a free, open source e-commerce platform for online merchants. OpenCart provides a professional and reliable foundation from which to build a successful online store.
OrangeHRM Helm Chart: OrangeHRM is a feature-rich, intuitive HR management system that offers a wealth of modules to suit the needs of any business. This widely used system provides an essential HR management platform.
Osclass Helm Chart: Osclass allows you to easily create a classifieds site without any technical knowledge. It provides support for presenting general ads or specialized ads and is customizable, extensible, and multilingual.
ownCloud Helm Chart: ownCloud is a file storage and sharing server that is hosted in your own cloud account. Access, update, and sync your photos, files, calendars, and contacts on any device, on a platform that you own.
Paladion MDR powered by AI Platform – AI.saac: Paladion's managed detection and response, powered by our next-generation AI platform, is a managed security service that provides threat intelligence, threat hunting, security monitoring, incident analysis, and incident response.
Parse Server Helm Chart: Parse is a platform that enables users to add a scalable and powerful back end to launch a full-featured app for iOS, Android, JavaScript, Windows, Unity, and more.
Phabricator Helm Chart: Phabricator is a collection of open source web applications that help software companies build better software.
PHP 5.6 Secured Jessie-cli Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.
PHP 5.6 Secured Stretch Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.
PHP 7.0 Secured Jessie Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.
PHP 7.0 Secured Jessie-cli Container - Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.
PHP 7.0 Secured Stretch Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.
PHP 7.1 Secured Jessie Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.
PHP 7.1 Secured Jessie-cli Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.
PHP 7.1 Secured Stretch Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.
PHP 7.2 Secured Stretch Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.
PHP 7.3 Rc Stretch Container with Antivirus: PHP is a server-side scripting language designed for web development. It is mainly used for server-side scripting and can collect form data, generate dynamic page content, and send and receive cookies.
phpBB Helm Chart: phpBB is a popular bulletin board that features robust messaging capabilities such as flat message structure, subforums, topic split/merge/lock, user groups, full-text search, and attachments.
PostgreSQL Helm Chart: PostgreSQL is an open source object-relational database known for reliability and data integrity. ACID-compliant, it supports foreign keys, joins, views, triggers, and stored procedures.
Project Ares: Project Ares by Circadence is an award-winning, gamified learning and assessment platform that helps cyber professionals of all levels build new skills and stay up to speed on the latest tactics.
Python Secured Jessie-slim Container - Antivirus: This image is made for customers who want to deploy a self-managed Community Edition on a hardened kernel rather than a vanilla installation.
Quvo: Quvo is a cloud-first, mobile-first working platform designed especially for public sector and enterprise mobile workforces.
RabbitMQ Helm Chart: RabbitMQ is a messaging broker that gives your applications a common platform to send and receive messages, and your messages a safe place to live until received.
Recordia: Smart Recording & Archiving Interactions: Recordia facilitates gathering all valuable customer interactions under one single repository in the cloud. Know how your sales, marketing, and support staff is doing.
Redis Helm Chart: Redis is an open source, advanced key-value store. It is often referred to as a data structure server since keys can contain strings, hashes, lists, sets, and sorted sets.
Redmine Helm Chart: Redmine is a popular open source project management and issue tracking platform that covers multiple projects and subprojects, each with its own set of users and tools, from the same place.
Secured MySQL 5.7 on Ubuntu 16.04 LTS: MySQL is a popular open source relational database management system and one of the most widely used RDBMSs for developing web-based software applications.
Secured MySQL 5.7 on Ubuntu 18.04 LTS: MySQL is a popular open source relational database management system and one of the most widely used RDBMSs for developing web-based software applications.
Smart Planner: Smart Planner is a web platform for the optimization of productive processes, continuous improvement, and integral management of the supply chain. This application is available only in Spanish.
SmartVM API - Improve your vendor master file: The SmartVM API vendor master cleansing, enriching, and continuous monitoring technology automates vendor master management to help you mitigate risks, eliminate costly information gaps, and improve your supplier records.
SuiteCRM Helm Chart: SuiteCRM is an open source, enterprise-grade customer relationship management (CRM) application that is a fork of the popular SugarCRM application.
Talend Cloud: Remote Engine for Azure: Talend Cloud is a unified, comprehensive, and highly scalable integration platform as a service (iPaaS) that makes it easy to collect, govern, transform, and share data.
TensorFlow ResNet Helm Chart: TensorFlow ResNet is a client utility for use with TensorFlow Serving and ResNet models.
TestLink Helm Chart: TestLink is test management software that facilitates software quality assurance. It supports test cases, test suites, test plans, test projects and user management, and stats reporting.
Tomcat Helm Chart: Tomcat is a widely adopted open source Java application and web server. Created by the Apache Software Foundation, it is lightweight and agile with a large ecosystem of add-ons.
Transfer Center: The comprehensive patient analytics and real-time reporting in Transfer Center help ensure improved care coordination, streamlined patient flow, and full regulatory compliance.
Unity Cloud: Unity is underpinned by Docker, so you can write custom full-code extensions in any language and enjoy fault tolerance, high availability, and scalability.
User Management Pack 365: User Management Pack 365 is a powerful software application that simplifies user lifecycle and identity management across Skype for Business deployments.
Webfopag – Online Payroll: Fully process payroll while meeting your business compliance rules. This application is available only in Portuguese.
WordPress Helm Chart: WordPress is one of the world's most popular blogging and content management platforms. It is powerful yet simple, and everyone from students to global corporations uses it to build beautiful, functional websites.
XAMPP: XAMPP is an easy-to-install Apache distribution designed to help developers get into the Apache universe quickly.
XAMPP Windows Server 2019: XAMPP is an easy-to-install Apache distribution designed to help developers get into the Apache universe quickly.
ZooKeeper Helm Chart: ZooKeeper provides a reliable, centralized register of configuration data and services for distributed applications.
Consulting Services
360 Degree Security System: 1-Hour Briefing: This 360 Degree Security System briefing will address why antivirus solutions are obsolete, how to automatically track and block brute force attacks, and how to automatically track and block malicious activity.
Application Migration: 3-Day Assessment: Chef consultants will attend your site and assess how to use Chef Habitat to migrate a legacy app from an older platform (such as Windows Server 2008 R2 and SQL Server 2008 R2) to Azure.
Archiving & Backup Essentials: 1-Hr Briefing: Learn how to take advantage of tiered storage in Microsoft Azure to dramatically reduce your storage and backup costs and enhance your resilience.
Azure Cloud Governance 1-Day Workshop: Join this day-long cloud governance learning event designed for IT and senior leadership. Discover cloud governance, understand the main concepts, and learn about what you can do to give your business an advantage.
Azure Data Centre Modernization: 3-Day Assessment: This Azure assessment will provide you with an understanding of what's possible for your business with a business case for migration that includes timing and cost estimates.
Azure Maturity: 4-Week Assessment: The Azure Maturity assessment aims at estimating the maturity of your organization (strengths and weaknesses) and building a roadmap that will allow you to make your cloud journey a success.
Azure: 5-Day Enterprise Scaffold Workshop: This workshop provides training, processes, and security settings to scale up and optimize the adoption of Azure by removing blockers to scale and introducing processes to scale safely and efficiently.
BizTalk to Azure Migration Assessment - 2 Day: This assessment will provide you with detailed guidance on how you can successfully move your BizTalk applications to Azure Integration Services running in the cloud.
Business Continuity System: 1 Hour Briefing: This briefing is for every IT director who wants to minimize downtime with dependable recovery, reduce infrastructure costs, or easily run disaster recovery drills without affecting ongoing replication.
Data Centre Migration Essentials: 1-Hr Briefing: Identify your migration options and uncover the best ROI opportunities in migrating your apps, data, and/or infrastructure to Microsoft Azure.
Data Compliance Monitoring - 3 Week Assessment: The CTO Boost team will work closely with your risk and compliance stakeholders to assess your compliance strategy and build a plan toward compliance automation.
Databricks 5 Day Data Engineering PoC: We will work with your development team to demonstrate the performance, scale, and reduced complexity that Azure Databricks can offer your business.
Email Compliance Essentials: 1-Hr Briefing: Discover how you can use Azure to provide email journaling, retention management, and e-discovery to meet your email compliance needs.
Legacy App Migration – 8-Week Assessment and Design: After investigating your legacy apps, we deliver a roadmap for your Azure cloud journey. Additionally, we design a modern user experience (UX) leveraging the latest usability and distributed workforce techniques.
Modern Data Architecture: 1-Hour Assessment: During this session we will discuss the different components that make up a modern data architecture to assess whether it is right for you and how Data Thirst could help you deliver a successful data platform that uses it.
Win/SQL 2008 EOL to Azure: 5-Day Assessment: This free assessment is focused on applications running on end-of-support Windows and SQL Server 2008 products and provides a detailed upgrade and migration plan to Microsoft Azure.
Windows/SQL 2008 to Azure: 1 Week Implementation: Need an efficient path forward for applications based on Windows or SQL Server 2008? This 1-week implementation provides a data-driven migration of your Windows or SQL workload to Microsoft Azure.
We just released a new capability that enables enriching messages egressed from Azure IoT Hub to other services. Azure IoT Hub provides an out-of-the-box capability to automatically deliver messages to different services and is built to handle billions of messages from your IoT devices. Messages carry important information that enables various workflows throughout the IoT solution. Message enrichment simplifies post-processing of your data and can reduce the cost of calling device twin APIs for information. This capability allows you to stamp information on your messages, such as details from your device twin, your IoT Hub name, or any static property you want to add.
A message enrichment has three key elements: the key name for the enrichment, the value of the enrichment key, and the endpoints that the enrichment applies to. Message enrichments are added to the IoT Hub message as application properties. You can add up to 10 enrichments per IoT Hub for standard and basic tier IoT Hubs, and two enrichments for free tier IoT Hubs. Enrichments can be applied to messages going to the built-in endpoint or to messages routed to custom endpoints such as Azure Blob storage, Event Hubs, Service Bus queues, and Service Bus topics. Each enrichment has a key that can be set to any string, and a value that can be a path into the device twin (e.g. $twin.tags.field), the name of the IoT Hub sending the message (e.g. $iothubname), or any static value (e.g. myapplicationId).
You can also use the IoT Hub Create or Update REST API, and add enrichments as part of the RoutingProperties. For example:
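To make that concrete, here is a minimal sketch of calling the Create or Update REST API from Python with the requests package. It is not an official sample: the subscription, resource group, hub name, API version, bearer token, and endpoint names are placeholders, and the exact shape of the enrichments array should be confirmed against the current IoT Hub REST reference before use.

```python
# Illustrative sketch only: PUT the IoT Hub resource with an "enrichments" array
# under properties.routing. All identifiers below are placeholders.
import requests

subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
hub_name = "<iot-hub-name>"
api_version = "2019-07-01-preview"  # assumed preview API version; check the docs
token = "<azure-ad-bearer-token>"

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.Devices"
    f"/IotHubs/{hub_name}?api-version={api_version}"
)

body = {
    "location": "westus2",
    "sku": {"name": "S1", "capacity": 1},
    "properties": {
        "routing": {
            # Each enrichment: a key, a value (a static string, $iothubname, or a
            # device twin path), and the endpoints the enrichment applies to.
            "enrichments": [
                {"key": "deviceLocation", "value": "$twin.tags.location", "endpointNames": ["events"]},
                {"key": "hubName", "value": "$iothubname", "endpointNames": ["events"]},
                {"key": "applicationId", "value": "myapplicationId", "endpointNames": ["events"]},
            ]
        }
    },
}

response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print(response.json()["properties"]["routing"]["enrichments"])
```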
This feature is available for preview in all public regions except East US, West US, and West Europe. We are excited for you to try this capability and build more streamlined IoT solutions for your business. Try this tutorial to get started.
We would love to hear more about your experiences with the preview and get your feedback! Are there other capabilities in IoT Hub that you would like to see? Please continue to submit your suggestions through the Azure IoT User Voice forum.
This post was co-authored by Andy Randall, VP of Business Development, Kinvolk GmbH.
We are pleased to share the availability of Calico Network Policies in Azure Kubernetes Service (AKS). Calico policies let you define filtering rules to control the flow of traffic to and from Kubernetes pods. In this blog post, we will explore in more technical detail the engineering work that went into enabling Azure Kubernetes Service to work with a combination of Azure CNI for networking and Calico for network policy.
First, some background. Simplifying somewhat, there are three parts to container networking:
Allocating an IP address to each container as it's created; this is IP address management, or IPAM.
Routing the packets between container endpoints, which in turn splits into:
Routing from host to host (inter-node routing).
Routing within the host between the external network interface and the container, as well as routing between containers on the same host (intra-node routing).
Ensuring that packets that should not be allowed are blocked (network policy).
Typically, a single network plug-in technology addresses all these aspects. However, the open API used by Kubernetes, the Container Network Interface (CNI), actually allows you to combine different implementations.
The choice of configurations brings you opportunities, but it also calls for a plan to make sure that the mechanisms you choose are compatible and enable you to achieve your networking goals. Let's look a bit more closely at those details.
Networking: Azure CNI
Cloud networks, like Azure, were originally built for virtual machines with typically just one or a small number of relatively static IP addresses. Containers change all that, and introduce a host of new challenges for the cloud networking layer, as dozens or even hundreds of workloads are rapidly created and destroyed on a regular basis, each of which is its own IP endpoint on the underlying network.
The first approach to enabling container networking in the cloud leveraged overlays, like VXLAN, to ensure only the host IP was exposed to the underlying network. Overlay network solutions like flannel, or AKS's kubenet (basic) networking mode, do a great job of hiding the underlying network from the containers. Unfortunately, that is also the downside: the containers are not actually running in the underlying VNET, meaning they cannot be addressed like a regular endpoint and can only communicate outside of the cluster via network address translation (NAT).
With Azure CNI, which is enabled with advanced mode networking in AKS, we added the ability for each container to get its own real IP address within the same VNET as the host. When a container is created, the Azure CNI IPAM component assigns it an IP address from the VNET, and ensures that the address is configured on the underlying network through the magic of the Azure software-defined network layer, taking care of the inter-node routing piece.
So with IPAM and inter-node routing taken care of, we now need to consider intra-node routing: how do we get a packet between two containers, or between the host's network interface (typically eth0) and the container's virtual ethernet (veth) interface?
It turns out the Linux kernel is rich in networking capabilities, and there are many different ways to achieve this goal. One of the simplest and easiest is with a virtual bridge device. With this approach, all the containers are connected on a local layer two segment, just like physical machines that are connected via an ethernet switch.
Packets from the ‘real’ network are switched through the bridge to the appropriate container via standard layer two techniques (ARP and address learning).
Packets to the real network are passed through the bridge, to the NIC, where they are routed to the remote node.
Packets from one container to another also flow through the bridge, just like two PCs connected on an ethernet switch.
This approach, illustrated in Figure 1, has the advantage of being high performance and requiring little control plane logic to maintain, helping to ensure robustness.
Figure 1: Azure CNI networking
Network policy with Azure
Kubernetes has a rich policy model for defining which containers are allowed to talk to which other ones, as defined in the Kubernetes Network Policy API. As we demonstrated recently at Ignite, we have now implemented this API and it works in conjunction with Azure CNI in AKS or in your own self-managed Kubernetes clusters in Azure, with or without AKS-Engine.
We translate the Kubernetes network policy model to a set of allowed IP address pairs, which are then programmed as rules in the Linux kernel iptables module. These rules are applied to all packets going through the bridge. This is shown in Figure 2.
Figure 2: Azure CNI with Azure Policy Manager
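To make the translation step above a little more concrete, here is a purely illustrative Python sketch (not the actual Azure policy manager code) that expands a hypothetical "allow app=frontend to app=backend" rule into the kind of source/destination address pairs that could then be programmed as iptables rules on the bridge path:

```python
# Toy illustration only: expand a label-selector rule into allowed (src, dst) IP pairs.
# A real policy manager watches the Kubernetes API and programs iptables incrementally.
from itertools import product

# Hypothetical pod inventory: name -> (labels, pod IP)
pods = {
    "frontend-1": ({"app": "frontend"}, "10.240.0.10"),
    "frontend-2": ({"app": "frontend"}, "10.240.0.11"),
    "backend-1":  ({"app": "backend"},  "10.240.0.20"),
    "db-1":       ({"app": "db"},       "10.240.0.30"),
}

def select(selector):
    """Return the IPs of pods whose labels match every key/value in the selector."""
    return [ip for labels, ip in pods.values()
            if all(labels.get(k) == v for k, v in selector.items())]

# The policy "allow traffic from app=frontend to app=backend" becomes a set of IP pairs.
allowed_pairs = sorted(product(select({"app": "frontend"}), select({"app": "backend"})))

for src, dst in allowed_pairs:
    # Each pair would correspond to an ACCEPT rule evaluated on packets crossing the bridge.
    print(f"allow {src} -> {dst}")
```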
Network policy with Calico
Kubernetes is also an open ecosystem, and Tigera’s Calico is well known as the first, and most widely deployed, implementation of Network Policy across cloud and on-premise environments. In addition to the base Kubernetes API, it also has a powerful extended policy model which supports a range of features such as global network policies, network sets, more flexible rule specification, the ability to run the policy enforcement agent on non-Kubernetes nodes, and application layer policy via integration with Istio. Furthermore, Tigera offers a commercial offering built on Calico, Tigera Secure, that adds a host of enterprise management, controls, and compliance features.
Given Kubernetes’ aforementioned modular networking model, you might think you could just deploy Calico for network policy along with Azure CNI, and it should all just work. Unfortunately, it is not this simple.
While Calico uses iptables for policy, it does so in a subtly different way. It expects containers to be established with separate kernel routes, and it enforces the policies that apply to each container on that specific container’s virtual ethernet interface. This has the advantage that all container-to-container communications are identical (always a layer 3 routed hop, whether internal to the host or across the underlying network), and security policies are more narrowly applied to the specific container’s context.
To make Azure CNI compatible with the way Calico works, we added a new intra-node routing capability to the CNI, which we call 'transparent' mode. When configured to run in this mode, Azure CNI sets up local routes for containers instead of creating a virtual bridge device. This is shown in Figure 3.
Figure 3: Azure CNI with Calico Network Policy
Onward and upstream
A Kubernetes cluster with the enhanced Azure CNI and Calico policies can be created using AKS-Engine by specifying the following configuration in the cluster definition file.
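As a sketch of what that fragment might look like, the following Python emits the relevant kubernetesConfig settings (shown in Python for consistency with the other examples in this post). The field names follow the AKS-Engine "vlabs" API model as we understand it; confirm them against the AKS-Engine documentation before use.

```python
# Sketch of the relevant cluster-definition (API model) fragment for AKS-Engine.
# Only the networking-related fields are shown; a real cluster definition also
# needs master/agent profiles, service principal details, and so on.
import json

cluster_definition_fragment = {
    "apiVersion": "vlabs",
    "properties": {
        "orchestratorProfile": {
            "orchestratorType": "Kubernetes",
            "kubernetesConfig": {
                "networkPlugin": "azure",   # Azure CNI handles IPAM and routing
                "networkPolicy": "calico",  # Calico enforces network policy
            },
        }
    },
}

print(json.dumps(cluster_definition_fragment, indent=2))
```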
These options have also been integrated into AKS itself, enabling you to provision a cluster with Azure networking and Calico network policy by simply specifying the options --network-plugin azure --network-policy calico at cluster create time.
Each quarter, the Azure Sphere team works to open new scenarios to customers through new features on-chip and in the cloud. The Azure Sphere 19.05 release continues this theme by unlocking the real-time capable cores that reside on the MT3620. Co-locating these cores within the same SOC enables new, real-time scenarios on the M4 cores while continuing to support connectivity scenarios on the high-level core. This release also introduces support for DHCP-based Ethernet connections to the cloud.
We are also pleased to announce that the Azure Sphere hardware ecosystem continues to expand with new modules available for mass production and new, less expensive development boards. Finally, new Azure Sphere reference solutions are available to accelerate your solution’s time to market.
To build applications that take advantage of this new functionality, please download and install the latest Azure Sphere SDK Preview for Visual Studio. All Wi-Fi connected devices will automatically receive an updated Azure Sphere operating system that contains support for these new features.
Enabling new MT3620-based features
Real-time core preview—The OS and SDK support the development, deployment, and debugging of real-time capable apps that use SPI, I2C, GPIO, UART, and ADC on the MT3620's two M4 cores. GitHub sample apps demonstrate GPIO, UART, and communication between the real-time cores and the high-level core.
ADC sample—This real-time core sample app demonstrates how to use the MT3620’s analog-to-digital converters to sample voltages. See the ADC GitHub sample for more details.
Tools and libraries
Improved CMake support—Visual Studio now supports one-touch deploy and debug for applications that use CMake.
Application runtime version—Application properties specify the required application runtime version (ARV), and azsphere commands detect conflicts. See the online documentation for details.
Random number generation (RNG)—The POSIX base API supports random number generation from Pluton's RNG.
Easy hardware targeting—Hardware-specific JSON and header files are provided in the GitHub sample apps repository. You can now easily target a particular hardware product by changing an application property.
New connectivity options
Ethernet internet interface—This release supports an Ethernet connection as an alternative to a Wi-Fi connection for communicating with the Azure Sphere Security Service and your own services. Our GitHub samples now demonstrate how to wire the supported Microchip part, bring up the Ethernet interface, and use it to connect to Azure IoT or your own web services.
Local device discovery—The Azure Sphere OS offers new network firewall and multicast capabilities that enable apps to run mDNS and DNS-SD for device discovery on local networks. Look for more documentation in the coming weeks on this feature.
Support for additional hardware platforms
Several hardware ecosystem partners have recently announced new Azure Sphere-enabled products:
SEEED MT3620 Mini Development Board—This less-expensive development board with single-band Wi-Fi is designed for size-constrained prototypes. It uses the AI-Link module for a quick path from prototype to commercialization.
USI Azure Sphere Combo Module—This module supports both dual-band Wi-Fi and Bluetooth. The on-board Bluetooth chipset supports BLE and Bluetooth 5 Mesh. The chipset can also work as an NFC tag to support non-contact Bluetooth pairing and device provisioning scenarios.
Avnet Guardian module—This module enables the secure connection of existing equipment to the internet. It attaches to the equipment through Ethernet and connects to the cloud via dual-band Wi-Fi.
Avnet MT3620 Starter Kit—This development board with dual-band Wi-Fi connectivity features modular connectors that support a range of MikroE Click and Grove modules.
Avnet Wi-Fi Module—This dual-band Wi-Fi module with stamp hole (castellated) pin design allows for easy assembly and simpler quality assurance.
There has never been a better time to begin developing on Azure Sphere, using the development kit or module which best fits your needs, or those of your customer, with highly customizable offerings available.
Whether you're a new student, a thriving startup, or the largest enterprise, you have financial constraints, and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Azure Cost Management comes in.
We're always looking for ways to learn more about your challenges and how Cost Management can help you better understand how and where you're accruing costs in the cloud, identify and prevent bad spending patterns, and optimize costs to empower you to do more with less. Here are a few of the latest improvements and updates based on your feedback:
Expanded general availability (GA): Pay-as-you-go and Azure Government
Azure Cost Management is now generally available for the following account types:
Public cloud:
Enterprise Agreements (EA)
Microsoft Customer Agreements (MCA)
Pay-as-you-go (PAYG) and dev/test subscriptions
Azure Government:
Enterprise Agreements
Stay tuned for more information about preview support for additional account types and clouds, like Cloud Solution Providers (CSP) and Sponsorship subscriptions. We know how critical it is for you to have a rich set of cost management tools for every account across every cloud, and we hear you loud and clear.
New preview: Manage AWS and Azure costs together in the Azure portal
Many organizations are adopting multi-cloud strategies for additional flexibility, but with increased flexibility comes increased complexity. From different cost models and billing cycles to underlying cloud architectures, having a single cross-cloud cost management solution is no longer a luxury, but a fundamental requirement to efficiently and effectively monitor, control, and optimize costs. This is where Azure Cost Management can help.
Start by creating a new AWS cloud connector from the Azure portal. From the home page of the Azure portal select the Cost Management tile. Then, select Cloud connectors (preview) and click the "Add" command. Simply specify a name, pick the management group you want AWS costs to be rolled up to, and configure the AWS connection details.
Cost Management will start ingesting AWS costs as soon as the AWS cost and usage report is available. If you created a new cost and usage report, AWS may take up to 24 hours to start exporting data. You can check the latest status from the cloud connectors list.
Once available, open cost analysis and change the scope to the management group you selected when creating the connector. Group by provider to see a breakdown of AWS and Azure costs. If you connected multiple AWS accounts or have multiple Azure billing accounts, group by billing account to see a breakdown by account.
In addition to seeing AWS and Azure costs together, you can also change the scope to your AWS consolidated or linked accounts to drill into AWS costs specifically. Create budgets for your AWS scopes to get notified as costs hit important thresholds.
Managing AWS costs is free during the preview, and you will not be charged. If you would like to automatically upgrade when AWS support is generally available, navigate to the connector, select the Automatically charge the 1 percent at general availability option, and then select the desired subscription to charge.
Learning a new service can take time. Reading through documentation is great, but you've told us that sometimes you just want a quick video to get you started. Well, here are eight:
Monitor costs based on your pay-as-you-go billing period
As you know, your pay-as-you-go and dev/test subscriptions are billed based on the day you signed up for Azure. They don’t map to calendar months, like EA and MCA billing accounts. This has made reporting on and controlling costs for each bill a little harder, but now you have the tools you need to effectively manage costs based on your specific billing cycle.
When you open cost analysis for a PAYG subscription, it defaults to the current billing period. From there, you can switch to a previous billing period or select multiple billing periods. More on the extended date picker options later.
If you want to get notified before your bill hits a specific amount, create a budget for the billing month. You can also specify if you want to track a quarterly or yearly budget by billing period.
Sometimes you need to export data and integrate it with your own datasets. Cost Management offers the ability to automatically push data to a storage account on a daily, weekly, or monthly basis. Now you can export your data as it is aligned to the billing period, instead of the calendar month.
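If you consume those exports programmatically, the sketch below shows one way to pick up the most recent CSV with the azure-storage-blob and pandas packages. The connection string, container name, "exports/" prefix, and column names are placeholders; they depend on how you configured the export and on your account type.

```python
# Sketch: read the most recent Cost Management export CSV from the storage account
# the export was configured to write to. All names below are placeholders.
import io

import pandas as pd
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("cost-exports")  # hypothetical container name

# Exports land as CSV blobs under the configured directory; take the newest one.
blobs = list(container.list_blobs(name_starts_with="exports/"))
latest = max(blobs, key=lambda b: b.last_modified)

data = container.download_blob(latest.name).readall()
costs = pd.read_csv(io.BytesIO(data))

# Example aggregation: total cost per day. Adjust the column names to match the
# schema your account type exports (they differ between EA, MCA, and PAYG).
print(costs.groupby("UsageDateTime")["PretaxCost"].sum())
```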
We love hearing your suggestions, so let us know if there's anything else that would help you better manage costs during your personalized billing period.
More comprehensive scheduled exports
Scheduled exports enable you to react to new data being pushed to you instead of periodically polling for updates. As an example, a daily export of month-to-date data will push a new CSV file every day from January 1-31. These daily month-to-date exports have been updated to continue to push data on the configured schedule until they include the full dataset for the period. For example, the same daily month-to-date export would continue to push new January data on February first and February second to account for any data which may have been delayed. The update guarantees you will receive a full export for every period, starting April 2019.
You've told us that analyzing cost trends and investigating spending anomalies sometimes requires a broad set of date ranges. You may want to look at the current billing period to keep an eye on your next bill or maybe you need to look at the last 30 days in a monthly status meeting. Some teams are even looking at the last 7 days on a weekly or even daily basis to identify spending anomalies and react as quickly as possible. Not to mention the need for longer-term trend analysis and fiscal planning.
Based on all the great feedback you've shared around needing a rich set of one-click date options, cost analysis now offers an extended date picker with more options to make it easier than ever for you to get the data you need quickly.
We also noticed trends in how you navigate between periods. To simplify this, you can now quickly navigate backward and forward in time using the < PREVIOUS and NEXT > links at the top of the date picker. Try it yourself and let us know what you think.
Share links to customized views
We've heard you loud and clear about how important it is to save and share customized views in cost analysis. You already know you can pin a customized view to the Azure portal dashboard, and you already know you can share dashboards with others. Now you can share a direct link to that same customized view. If somebody who doesn't have access to the scope opens the link, they'll get an access denied message, but they can change the scope to keep the customizations and apply them to their own scope.
You can also customize the scope to share a targeted URL. Here's the format of the URL:
The domain is optional. If you remove that, the user's preferred domain will be used.
The scope is also optional. If you remove that, the user's default scope will be the first billing account, management group, or subscription found. If you specify a custom scope, remember to URL-encode (e.g. "/" → "%2F") the scope, otherwise cost analysis will not load correctly.
The view configuration is a gzipped, URL-encoded JSON object. As an example, here's how you can decode a customized view:
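Here is a minimal Python sketch of that round trip, assuming the value is literally a percent-encoded gzip stream as described above; if the link format also wraps the payload in base64, add that decode step where noted in the comments. The example keys are toy values, not the real view schema.

```python
# Sketch: decode (and re-encode) a cost analysis view parameter described above as a
# gzipped, URL-encoded JSON object.
import gzip
import json
import urllib.parse
# import base64  # only needed if the payload is also base64-wrapped

def decode_view(view_param: str) -> dict:
    raw = urllib.parse.unquote_to_bytes(view_param)  # undo the URL encoding
    # raw = base64.b64decode(raw)                    # uncomment if a base64 layer is present
    return json.loads(gzip.decompress(raw))          # undo the gzip and parse the JSON

def encode_view(view: dict) -> str:
    packed = gzip.compress(json.dumps(view).encode("utf-8"))
    return urllib.parse.quote_from_bytes(packed)     # percent-encode so it is URL-safe

# Round-trip a toy view configuration to show the two helpers agree.
example = {"chart": "Area", "granularity": "Daily"}  # hypothetical keys
assert decode_view(encode_view(example)) == example
```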
Want to keep an eye on all documentation updates? Check out the Cost Management doc change history in the azure-docs repository on GitHub. If you see something missing, select "Edit" at the top of the doc and submit a quick pull request.
What's next?
These are just a few of the big updates from the last month. We're always listening and making constant improvements based on your feedback, so please keep the feedback coming.
Azure Deployment Manager is a new set of features for Azure Resource Manager that greatly expands your deployment capabilities. If you have a complex service that needs to be deployed to several regions, if you’d like greater control over when your resources are deployed in relation to one another, or if you’d like to limit your customer’s exposure to bad updates by catching them while in progress, then Deployment Manager is for you. Deployment Manager allows you to perform staged rollouts of resources, meaning they are deployed region by region in an ordered fashion.
During Microsoft Build 2019, we announced that Deployment Manager now supports integrated health checks. This means that as your rollout proceeds, Deployment Manager will integrate with your existing service health monitor, and if during deployment unacceptable health signals are reported from your service, the deployment will automatically stop and allow you to troubleshoot.
In order to make health integration as easy as possible, we’ve been working with some of the top service health monitoring companies to provide you with a simple copy/paste solution to integrate health checks with your deployments. If you’re not already using a health monitor, these are great solutions to start with:
These service monitors provide a simple copy/paste solution to integrate with Azure Deployment Manager’s health integrated rollout feature, allowing you to easily prevent bad updates from having far reaching impact across your user base. Stay tuned for Azure Monitor integration, which is coming soon.
Additionally, Azure Deployment Manager no longer requires sign-up for use, and is now completely open to the public!
Performance has been a big focus area for Visual Studio 2019, with improvements in many areas, including:
Faster Visual Studio startup
Faster branch switching experience in Visual Studio
C++ open folder – time to IntelliSense improvements
Faster C++ compiler build times
Faster debug stepping
Debug extra large C++ codebases
Faster installation updates
Faster and cleaner startup
Something you’ll notice when you open Visual Studio 2019 is its new start window. The new start window is much faster than Visual Studio 2017’s start window and has been designed to present you with several options to get you to code quickly. In addition, starting with Visual Studio 2019 version 16.1, Visual Studio blocks synchronously autoloaded extensions to improve startup and solution load times. This allows you to get to your code faster.
Faster branch switching experience
When working with Git, part of the usual workflow is to create and work on code branches. The branch switching experience has been completely redesigned over the last 6 months. Starting with Visual Studio 2017 update 15.8, the IDE no longer completely unloads and reloads the solution during branch switches (unless a large number of projects is updated as part of the branch switching operation).
To avoid context switching between the IDE and the Git command line, Visual Studio 2019 now provides an integrated branch switching experience that allows you to “stash” any uncommitted changes during the branch switch operation. You no longer need to go outside of the IDE to stash your changes before switching branches in Visual Studio.
Faster debugger stepping
Since a large part of the development cycle involves stepping through and debugging code, we have made several improvements to debugger performance. Stepping through your code is over 50% faster in Visual Studio 2019 than in 2017, and the Watch, Autos, and Locals windows are 70% faster. Moreover, since most debugger-related windows (the Watch window, Call Stack window, etc.) are now asynchronous, you can interact with one window in Visual Studio while waiting for information to load in another.
Debug very large C++ codebases
Visual Studio 2019 introduces an improved debugger for C++ that uses an external 64-bit process for hosting its memory-intensive components. If you've experienced memory-related issues while debugging large C++ applications before, these issues should now be resolved with Visual Studio 2019. You can read how the new external debug process addressed these issues in our Gears of War case study.
Indexing and IntelliSense performance in C++ CMake Projects
Indexing is now significantly faster for code opened via Open Folder, and, as a result, IntelliSense is available considerably sooner than in Visual Studio 2017. For example, in the LLVM codebase, IntelliSense becomes available twice as fast in Visual Studio 2019. Additionally, a new indexing algorithm lights up IntelliSense incrementally while the folder is being indexed, so you don't need to wait for the entire folder to be indexed before you can be productive with your code.
Faster C++ builds: 2x faster linker
C++ builds have been made faster with improvements in the C++ linker. For example, we see over 2x faster build linker times for an Unreal Engine-based AAA game.
Faster installation of Visual Studio updates
With the introduction of background downloads for updates in version 16.0, you can keep working on your code for longer while the update downloads in the background. Once the download completes and the update is ready for installation, you will get a notification to let you know that you're good to go. Using this approach, the installation time for Visual Studio 2019 updates has decreased significantly.
Try Visual Studio 2019 and let us know
We welcome you to try Visual Studio 2019, either with your own projects or with open source projects such as the Roslyn compilers, and see how it compares to Visual Studio 2017 for your scenarios. We are always looking for more feedback on which improvements are working for you, which ones are not, and which areas we should focus on next.
If you are seeing performance issues, please send us feedback through Visual Studio’s Report a Problem tool that you can access from Help -> Send Feedback -> Report a problem. This tool will ensure that we have the right set of data to help us analyze the issue.
On June 15th 2019, our amazing community, passionate about DevOps on the Microsoft stack, are coming together for the 3rd Global DevOps Bootcamp. Every year, the community organisers set a challenging yet inspiring theme on building modern apps using continuous delivery, and this year is no exception, where you’ll be taking your skills to the next level by leveraging DevOps principals – You build it, you run it! This year’s the theme focuses on 3 aspects of rugged DevOps:
Detection: Get insights into how your system behaves to make you and your users aware of any anomalies.
Response: Connect with your users to make sure they are aware of the issues.
Recovery: Action remediation to get your system back into operation.
The bootcamps will kick off with an exclusive keynote from Niall Murphy (Director of Engineering for Azure Cloud Services and Site Reliability Engineering), followed by a local keynote that goes deeper into the challenges you'll face during the day, before you get stuck into the hackathon challenges.
You'll also learn about the latest trends and hear people share their real-world DevOps experiences. It's a great opportunity to learn from and network with others working in this space locally. So, join your fellow community members and attend one of the many events near you.
I’m on vacation this week, so I’ve been eating fatty foods and drinking tasty beverages instead of looking after the fine features of Azure DevOps. But, of course, the DevOps community isn’t on vacation, so here’s some of the great news and stories that they’ve been working on this week.
Containerised CI/CD pipelines with Azure DevOps
If you’re a regular reader of this top stories roundup, you’ll know that I can’t stop talking about containerizing your CI/CD process. Brent Robinson has another great post about simplifying your builds and supporting complex architectures with containers.
TFVC to Git – Things to Consider
Centralized version control makes a lot of sense for a lot of workflows and repositories. But if you're only using it because of momentum and you want to move to a distributed workflow, Chris Ayers has some good tips to get you ready for the move.
Azure DevOps Rest Api. 18. Create and Clone Build Definitions
One of the great things about checking in your build definition as YAML is that you don’t need a build definition per branch; so it’s less common that you’ll need to clone them. But when you do, it’s helpful to be able to do that programmatically. Shamrai Alexander shows you how to use the REST API to clone build definitions.
Troubleshoot YAML Build first run
The new YAML build experience simplifies a lot of the setup for your CI/CD pipeline, but when you have a complex configuration the first run can be a little tricky. Gian Maria Ricci explains how to debug a non-standard setup.
As always, if you’ve written an article about Azure DevOps or find some great content about DevOps on Azure then let me know! I’m @ethomson on Twitter.
Azure DevOps is currently investing in enhancing its routing structure. This change is designed to increase service availability and decrease service latency for many users. As a result of this enhancement, our IP address space will be changing. If you’re currently using firewall rules to allow traffic to Azure DevOps, please be sure to update these rules to account for our new IP ranges. These IP address changes go into full effect 6/28/2019.
Determining impact (coming soon)
To help you determine whether this change impacts your organization, we are building an Azure DevOps IP check page. When you navigate to the page, we'll run a sample request against our new routing structure. If the request fails, you'll get a red "X" in the response; to resolve this, you'll need to update your IP address whitelist. This feature isn't implemented yet, but it is expected to be in place within the upcoming week.
IP address whitelist changes
To react to the changes in our IP address space, users should ensure dev.azure.com is open and update their whitelisted IPs to include the following IP addresses (based on your IP version). If you are currently whitelisting the 13.107.6.183 and 13.107.9.183 IP addresses, please leave these in place. You do not need to remove them.
Over the course of the next few weeks, we will conduct a series of tests to identify organizations that may be impacted by these routing changes. We will conduct our first test June 14th at 9 AM EST. This test will last for 1 hour. We will conduct our second test June 21st during the hours of 8AM – 11AM EST. This test will last for 4 hours. If you are unable to access your organization during this period of time, please navigate to the status page and validate we are testing our new routing structure. In the event we are running these tests and you’re unable to access your Azure DevOps organization, please update your IP address whitelist.
Reporting Issues
If you experience any issues with accessing your Azure DevOps organization after updating your IP whitelist, please post an update on this open developer community item.
Azure Virtual Machine HB-series are the first on the public cloud to scale an MPI-based high performance computing (HPC) job to 10,000 cores. This level of scaling has long been considered the realm of only the world’s most powerful and exclusive supercomputers, but is now available to anyone using Azure. HB-series virtual machines (VMs) are optimized for HPC applications requiring high memory bandwidth. For this class of workload, HB-series VMs are the most performant, scalable, and price-performant ever launched on Azure or elsewhere on the public cloud.
Event-driven architectures are increasingly replacing and outpacing less dynamic polling-based systems, bringing the benefits of serverless computing to IoT scenarios, data processing tasks, or infrastructure automation jobs. As the natural evolution of microservices, companies all over the world are taking an event-driven approach to create new experiences in existing applications or bring those applications to the cloud, building more powerful and complex scenarios every day. Today, we’re incredibly excited to announce a series of updates to Event Grid that will power higher performance and more advanced event-driven applications in the cloud.
We're excited to announce the general availability (GA) of Azure NetApp Files, the industry’s first bare-metal cloud file storage and data management service. Azure NetApp Files is an Azure first-party service for migrating and running the most demanding enterprise file-workloads in the cloud including databases, SAP, and high-performance computing applications with no code changes. This milestone is the result of deep investment by both companies to provide a great experience for our customers through a service that’s unique in the industry.
We just released a new capability that enables enriching messages that are egressed from Azure IoT Hub to other services. Azure IoT Hub provides an out-of-the-box capability to automatically deliver messages to different services and is built to handle billions of messages from your IoT devices. Messages carry important information that enable various workflows throughout the IoT solution. Message enrichment simplifies the post-processing of your data and can reduce the costs of calling device twin APIs for information. This capability allows you to stamp information on your messages, such as details from your device twin, your IoT Hub name, or any static property you want to add.
We’re excited to announce the public preview of Azure App Configuration, a new service aimed at simplifying the management of application configuration and feature flighting for developers and IT. App Configuration provides a centralized place in Microsoft Azure for users to store all their application settings and feature flags (also known as, feature toggles), control their accesses, and deliver the configuration data where it's needed.
It's common for enterprises to run workloads on more than one cloud provider. However, adopting a multi-cloud strategy comes with complexities like handling different cost models, varying billing cycles, and different cloud designs that can be difficult to navigate across multiple dashboards and views. We've heard from many of you that you need a central cost management solution built to help you manage your spend across multiple cloud providers, prevent budget overruns, maintain control, and create accountability with your consumers. Azure Cost Management now offers cross-cloud support. This is available in preview and can play a critical role in helping you efficiently and effectively manage your organization's multi-cloud needs.
Innovation at scale is a common challenge facing large organizations. A key contributor to the challenge is the complexity in coordinating the sheer number of apps and environments. Integration tools, such as Azure Logic Apps, give you the flexibility to scale and innovate as fast as you want, on-premises or in the cloud. This is a key capability you need to have in place when migrating to the cloud, or even if you're cloud native. Oftentimes integration has been relegated as something to do after the fact. In the modern enterprise, however, application integration is something that has to be done in conjunction with application development and innovation.
Migrating to a Microsoft Azure SQL Database managed instance provides a host of operational and financial benefits you can only get from a fully managed and intelligent cloud database service. Some of these benefits come from features that optimize or improve overall database performance. After migration, many of our customers are eager to compare workload performance with what they experienced with on-premises SQL Server, and sometimes they're surprised by the results. This article will help you understand the underlying factors that can cause performance differences and the steps you can take to make fair comparisons between SQL Server and SQL Database.
Serverless is a word that marketing teams around the world love to associate with cloud-based offerings, but what does it really mean? What’s the difference between fully managed offerings and true “serverless?” Are there really no servers involved? Should you migrate existing application services to serverless? How do you decide what new projects should incorporate serverless? This video explains.
Matias Quaranta joins Scott Hanselman to share some best practices for creating serverless geo-distributed applications with Azure Cosmos DB. With the native integration between Azure Cosmos DB and Azure Functions, you can create database triggers, input bindings, and output bindings directly from your Azure Cosmos DB account. Using Azure Functions and Azure Cosmos DB, you can create and deploy event-driven serverless apps with low-latency access to rich data for a global user base.
There are many benefits that .NET Core can bring to desktop applications. With .NET Core 3.0, support is being added for building desktop applications with WinForms and Windows Presentation Foundation (WPF). In this episode, Jeremy is joined by Merrie McGaw and Dustin Campbell, who share some interesting insights on the work that's going into getting the WinForms designer ready for .NET Core 3.
Sensoria is an Azure IoT partner whose vision is The Garment is The Computer®. Sensoria's proprietary sensor-infused smart garments, Sensoria® Core microelectronics and cloud system enable smart garments to convert data into actionable information for users in real-time. Davide Vigano shares the vision and the product on the IoT Show and how they partner with Azure IoT.
Xamarin.Essentials provides developers with cross-platform APIs for their mobile applications. In this week's Xamarin.Essentials API of the week, we take a look at the Version Tracking API.
In this episode, Robert is joined by Kendra Havens. Every version of Visual Studio introduces new productivity features. If you want to see some of the ones introduced in Visual Studio 2019, check out Kendra's video “Write beautiful code, faster.” But what about the ones that have been in Visual Studio for a while that you may have missed? To see some of those, watch this video.
Lars sits down with Tiago Costa, Cloud Architect and Advisor, as he breaks down Microsoft’s newly launched role-based certifications, from the MVP Global Summit. We get some insight into the "why” behind the certification change, and some bonus exam tips from this Azure MVP and Microsoft Certified Trainer.
Kendall and Cynthia talk with Sujay Talasila and Won Huh on how to think about disaster recovery, differences that need to be considered between disaster recovery and backups, and recommended practices that users should consider.
Apache Kafka is one of the most popular open source streaming platforms today. However, deploying and running Kafka remains a challenge for most. Azure HDInsight addresses this challenge by providing a range of improvements. This blog describes them, and also shows how you can now successfully manage your streaming data operations, from visibility to monitoring, with Lenses, an overlay platform now generally available as part of the Azure HDInsight application ecosystem, right from within the Azure portal.
We just shipped 1.1.0 Preview 1 of the Azure SignalR Service SDK to support some new features in ASP.NET Core 3.0, including endpoint routing and server-side Blazor. Let's take a look at how you can use them in your Azure SignalR application.
Here is the list of what’s new in this release:
Endpoint routing support for ASP.NET Core 3
Use SignalR service in server-side Blazor apps
Server stickiness
Endpoint routing support for ASP.NET Core 3
If you are using Azure SignalR, you should be familiar with AddAzureSignalR() and UseAzureSignalR(). These two methods are required when you switch your app server from self-hosted SignalR to Azure SignalR Service.
In a typical Azure SignalR application, both methods are wired up in Startup.cs: AddAzureSignalR() is chained onto AddSignalR() in ConfigureServices(), and UseAzureSignalR() is called in Configure() to map your hubs.
ASP.NET Core 3.0 introduced new endpoint routing support, which allows routable endpoints like MVC and SignalR hubs to be mixed together in a unified UseEndpoints() interface.
For example, you can call MapGet() and MapHub() within a single UseEndpoints() call, and Azure SignalR Service now supports this pattern.
The only change you need to make is to call AddAzureSignalR() after AddSignalR().
This will be very useful in the case that SignalR is deeply integrated in your code base or the library you’re using. For example, when you’re using server-side Blazor.
Use SignalR service in server-side Blazor apps
Server-side Blazor is a new way to build interactive client-side web UI in ASP.NET Core 3. In server-side Blazor, UI updates are rendered on the server and then sent to the browser through a SignalR connection. Since it uses SignalR, there is a natural need for Azure SignalR Service to handle the SignalR traffic so your application can scale easily.
If you look at some server-side Blazor code samples, you’ll see they have a call to MapBlazorHub() to setup the communication channel between client and server.
The implementation of this method calls MapHub() to create a SignalR hub on the server. Before this release, there was no way to change the implementation of MapBlazorHub() to use the SignalR service. Now, if you call AddAzureSignalR(), MapBlazorHub() will also use the SignalR service to host the hub instead of hosting it on the app server.
Please follow these steps to change your server-side Blazor app to use SignalR service:
Open your Startup.cs, add services.AddSignalR().AddAzureSignalR() in ConfigureServices().
Create a new SignalR service instance.
Get the connection string and set it as the Azure:SignalR:ConnectionString environment variable.
Then run your app; you'll see the WebSocket connection going through the SignalR service.
The typical connection flow when using the SignalR service is that the client first negotiates with an app server to get the URL of the SignalR service, and the service then routes the client to an app server.
When you have multiple app servers, there is no guarantee that the two servers (the one that handles negotiation and the one that receives the hub invocation) will be the same.
Many customers have asked whether it's possible to make the two servers the same so they can share state between negotiation and hub invocation. In this release, we have added a new "server sticky mode" to support this scenario.
To enable this, set ServerStickyMode to Required in the options you pass to AddAzureSignalR().
Now, for any connection, the SignalR service guarantees that negotiation and hub invocation go to the same app server (called "server stickiness").
This feature is very useful when you have client state maintained locally on the app server. For example, when using server-side Blazor, UI state is maintained on the server, so you want all client requests, including the SignalR connection, to go to the same server. In that case, you need to set server sticky mode to Required when using server-side Blazor together with the SignalR service.
Please note that in this mode there may be additional cost for the service to route connections to the right app server, so there can be some negative impact on message latency. If you don't want that performance penalty, there is another mode, Preferred, that you can use. In this mode stickiness is not always guaranteed (only when there is no additional routing cost), but you can still gain some performance benefits, since message delivery is more efficient when sender and receiver are on the same app server. Also, when sticky mode is enabled, the service won't balance connections between app servers (by default, the SignalR service balances traffic by routing to the server with the fewest connections). So we recommend leaving sticky mode set to Disabled (the default value) and only enabling it when there is a need.
You can refer to this doc for more details about server sticky mode.
In today's cloud-driven world, employees are only given access to the data that is absolutely necessary for them to perform their jobs effectively. This limited access is especially important in scenarios where it's difficult to monitor access behaviors, such as when you have many employees and/or engage vendors. Access is usually based on job responsibility, authority, and capability; as a result, some job profiles will not have access to certain data, or the right to perform specific actions, if they don't need them to fulfill their responsibilities. The ability to control access in this way, while still allowing infrastructure administrators to perform their job duties, is becoming more relevant and is frequently requested by customers.
You asked, we listened!
When we released the automatic update of agents used in disaster recovery (DR) of Azure Virtual Machines (VMs), the most frequent feedback we received was related to access control. Customers had DR admins who were given just enough rights to enable, fail over, or test DR. While they wanted to enable automatic updates and avoid having to monitor for monthly updates and manually upgrade the agents, they didn't want to give the DR admin contributor access to the subscription, which would allow them to create automation accounts. The request we heard from you was to allow customers to provide an existing automation account, approved and created by a person entrusted with the right access in the subscription. This automation account could then be used to execute the runbook that checks for new updates and upgrades the existing agent every time there is a new release.
How to choose an existing automation account?
Choose the virtual machine you want to enable replication for.
In the Advanced Settings blade, under Extension Settings, choose a previously created Automation account.
This automation account can be used to automatically update agents for all Azure virtual machines within the Recovery Services vault. If you change it for one virtual machine, the same will be applied to all virtual machines.
Please note that this capability is only applicable to disaster recovery of Azure virtual machines, not to Hyper-V or VMware VMs.
In addition to this, we recently announced one of the top customer requests we've received, which provides better control of your workloads:
Enable replication for a newly added disk – You can enable replication for a data disk that's been newly added to an Azure VM that's already configured for disaster recovery.
Azure natively provides you the high availability and reliability for your mission-critical workloads, and you can choose to improve your protection and meet compliance requirements using the disaster recovery provided by Azure Site Recovery. Getting started with Azure Site Recovery is easy, check out pricing information and sign up for a free Microsoft Azure trial. You can also visit the Azure Site Recovery forum on MSDN for additional information and to engage with other customers.
Today marks an exciting milestone for Microsoft's commerce ecosystem evolution as we extend our sales channels to benefit our partners and, ultimately, our customers. With audience-specific web stores AppSource and Azure Marketplace, thousands of partners in the Cloud Solution Provider program, and a world-class enterprise sales team, Microsoft provides you with both cloud technologies and go-to-market motions.
As announced at Microsoft Build, we’re rolling out new capabilities to simplify publisher sign-up and account management, unlock powerful Software-as-a-Service (SaaS) business models, and improve product discovery and purchasing. We look forward to sharing many more updates and announcements at Microsoft Inspire in July, and hope to see you there!
New partner center tooling for offer management and account settings
Whether your organization is new to Microsoft's commercial marketplace, well established, or still evaluating its options, the updates to the sign-up, publishing, and offer management experiences will make life much easier.
Effective immediately, all new publisher accounts are created through a simplified and streamlined process in Partner Center. Registering as a Microsoft partner, getting a Microsoft partner network ID, and creating your publisher profile is easy and intuitive. Sign up for the Microsoft commercial marketplace program today.
If you have an existing account in the Cloud Partner Portal, you’ll receive an email inviting you to activate your account in Partner Center. From that point forward, your account and publisher information will be mastered in Partner Center and synchronized to Cloud Partner Portal.
Starting with SaaS offers and then expanding to other offer types in waves, the offer creation and management will take place in Partner Center. But don't worry—the coexistence experience will be seamless, with automatic transition between portals without the need to open a new window. Learn more about the transition from Cloud Partner Portal to Partner Center.
Commercial marketplace overview page within Partner Center
Software-as-a-Service user licensing
In 2018, Microsoft introduced the ability for partners to offer flat fee (also known as site-based) SaaS offerings on a monthly basis in the commercial marketplace, and in March 2019, we added annual billing capabilities. Now we’re introducing the option to offer seat-based SaaS subscriptions, which are licensed per user. Using Azure Active Directory for single sign on and the marketplace SaaS Fulfillment API, Microsoft's commercial marketplace is optimized for partners delivering business solutions.
Within Microsoft's commercial marketplace, SaaS is defined as a software solution that’s developed, deployed, managed, updated, and supported within the ISV's infrastructure or Azure subscription.
Software-as-a-Service plan creation in Partner Center showing seat based billing.
AppSource purchases
To complement the availability of a business solution licensing model, we're excited to introduce a purchase experience directly in AppSource, the business and industry web store. With just a credit card and an Azure Active Directory account, customers can discover, evaluate, and purchase SaaS solutions provided by Microsoft's partners.
Coming soon
Our teams are busy working on many more capabilities and enhancements that we’re excited to release in the future. Learn more about the commercial marketplace roadmap.
Have questions or feedback? Join the conversation in the marketplace section of the Microsoft Partner Community.
On June 3rd 2009, we debuted to the world with a fresh approach to search – one that was anchored around the mission of empowering people through knowledge - helping you do more, not just search more. A lot has happened since then, so we wanted to take a moment to look back and celebrate you - the people who have transformed the pursuit of answers into understanding, perspective and action.
From the beginning we knew search needed to become more than just a list of blue links. We continued to iterate and innovate across the search experience; in 2017, we recognized a similar unmet need – search should give you answers faster, be more comprehensive and allow everyone to engage more naturally. Search needed to become more intelligent.
We made a commitment to invest in search experiences that help people discover facts, uncover multiple perspectives, find better options, and see the bigger picture.
These advancements are enabled by Microsoft Research labs, deep neural networks, state of the art machine learning and passionate teams working to deliver the best search experience possible.
Through June 17th, we are celebrating by sharing some of the most popular homepage images over the last 10 years. Of course, it wouldn’t be a birthday without a gift or two. The #BingIs10 retrospective experience contains a treasure trove of hidden gems, including prizes, Microsoft Rewards points, downloadable wallpaper packs and more.
We want to thank the millions of people who use Bing every day. Together, we will continue to develop search experiences that prioritize you, give you answers with less effort and spark curiosity.
Writing good documentation is hard. Tools can’t solve this problem in themselves, but they can ease the pain. This post will show you how to use Sphinx to generate attractive, functional documentation for C++ libraries, supplied with information from Doxygen. We’ll also integrate this process into a CMake build system so that we have a unified workflow.
For an example of a real-world project whose documentation is built like this, see fmtlib.
Why Sphinx?
Doxygen has been around for a couple of decades and is a stable, feature-rich tool for generating documentation. However, it is not without its issues. Docs generated with Doxygen tend to be visually noisy, have a style straight out of the early nineties, and struggle to clearly represent complex template-based APIs. There are also limitations to its markup. Although Markdown support was added in 2012, Markdown is simply not the best tool for writing technical documentation, since it sacrifices extensibility, feature-set size, and semantic markup for simplicity.
Sphinx instead uses reStructuredText, which takes the important concepts missing from Markdown as core ideals: you can add your own "roles" and "directives" to the markup to make domain-specific customizations. There are some great comparisons of reStructuredText and Markdown by Victor Zverovich and Eli Bendersky if you'd like more information.
The docs generated by Sphinx also look a lot more modern and minimal than Doxygen's, and it's much easier to swap in a different theme, customize the amount of information displayed, and modify the layout of the pages.
On a more fundamental level, Doxygen's style of documentation is to list out all the API entities along with their associated comments in a more digestible, searchable manner. It's essentially paraphrasing the header files, to take a phrase from Robert Ramey[1]; embedding things like rationale, examples, and notes, or swapping out auto-generated output for hand-written content, is not very well supported. In Sphinx, however, the finer-grained control gives you the ability to write documentation that is truly geared towards helping people learn and understand your library.
If you’re convinced that this is a good avenue to explore, then we can begin by installing dependencies.
Install Dependencies
Doxygen
Sphinx doesn’t have the ability to extract API documentation from C++ headers; this needs to be supplied either by hand or from some external tool. We can use Doxygen to do this job for us. Grab it from the official download page and install it. There are binaries for Windows, Linux (compiled on Ubuntu 16.04), and MacOS, alongside source which you can build yourself.
Sphinx
Pick your preferred way of installing Sphinx from the official instructions. It may be available through your system package manager, or you can get it through pip.
Read the Docs Sphinx Theme
I prefer this theme to the built-in ones, so we can install it through pip:
> pip install sphinx_rtd_theme
Breathe
Breathe is the bridge between Doxygen and Sphinx; taking the output from the former and making it available through some special directives in the latter. You can install it with pip:
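> pip install breathe
Create a Basic Project
With the dependencies installed, we need a library to document. The snippets that follow assume a small project called CatCutifier with a single documented header:
CatCutifier/CatCutifier/CatCutifier.h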
#pragma once

/**
A fluffy feline
*/
struct cat {
    /**
    Make this cat look super cute
    */
    void make_cute();
};
CatCutifier/CatCutifier/CMakeLists.txt
add_library (CatCutifier "CatCutifier.cpp" "CatCutifier.h")
target_include_directories(CatCutifier PUBLIC .)
If you now build your project, you should get a CatCutifier library which someone could link against and use.
Now that we have our library, we can set up document generation.
Set up Doxygen
If you don’t already have Doxygen set up for your project, you’ll need to generate a configuration file so that it knows how to generate docs for your interfaces. Make sure the Doxygen executable is on your path and run:
> mkdir docs
> cd docs
> doxygen.exe -g
You should get a message like:
Configuration file `Doxyfile' created.
Now edit the configuration file and enter
doxygen Doxyfile
to generate the documentation for your project
We can get something generated quickly by finding the INPUT variable in the generated Doxyfile and pointing it at our code:
INPUT = ../CatCutifier
Now if you run:
> doxygen.exe
You should get an html folder generated, which you can point your browser at to see some basic generated documentation for our cat struct.
We’ve successfully generated some simple documentation for our class by hand. But we don’t want to manually run this command every time we want to rebuild the docs; this should be handled by CMake.
Doxygen in CMake
To use Doxygen from CMake, we need to find the executable. Fortunately CMake provides a find module for Doxygen, so we can use find_package(Doxygen REQUIRED) to locate the binary and report an error if it doesn’t exist. This will store the executable location in the DOXYGEN_EXECUTABLE variable, so we can add_custom_command to run it and track dependencies properly:
find_package(Doxygen REQUIRED)
# Find all the public headers
get_target_property(CAT_CUTIFIER_PUBLIC_HEADER_DIR CatCutifier INTERFACE_INCLUDE_DIRECTORIES)
file(GLOB_RECURSE CAT_CUTIFIER_PUBLIC_HEADERS ${CAT_CUTIFIER_PUBLIC_HEADER_DIR}/*.h)
#This will be the main output of our command
set(DOXYGEN_INDEX_FILE ${CMAKE_CURRENT_SOURCE_DIR}/html/index.html)
add_custom_command(OUTPUT ${DOXYGEN_INDEX_FILE}
DEPENDS ${CAT_CUTIFIER_PUBLIC_HEADERS}
COMMAND ${DOXYGEN_EXECUTABLE} Doxyfile
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
MAIN_DEPENDENCY Doxyfile
COMMENT "Generating docs")
add_custom_target(Doxygen ALL DEPENDS ${DOXYGEN_INDEX_FILE})
The final custom target makes sure that we have a target name to give to make and that dependencies will be checked for a rebuild whenever we Build All or do a bare make.
We also want to be able to control the input and output directories from CMake so that we’re not flooding our source directory with output files. We can do this by adding some placeholders to our Doxyfile (we’ll rename it Doxyfile.in to follow convention) and having CMake fill them in with configure_file:
find_package(Doxygen REQUIRED)
# Find all the public headers
get_target_property(CAT_CUTIFIER_PUBLIC_HEADER_DIR CatCutifier INTERFACE_INCLUDE_DIRECTORIES)
file(GLOB_RECURSE CAT_CUTIFIER_PUBLIC_HEADERS ${CAT_CUTIFIER_PUBLIC_HEADER_DIR}/*.h)
set(DOXYGEN_INPUT_DIR ${PROJECT_SOURCE_DIR}/CatCutifier)
set(DOXYGEN_OUTPUT_DIR ${CMAKE_CURRENT_BINARY_DIR}/docs/doxygen)
set(DOXYGEN_INDEX_FILE ${DOXYGEN_OUTPUT_DIR}/html/index.html)
set(DOXYFILE_IN ${CMAKE_CURRENT_SOURCE_DIR}/Doxyfile.in)
set(DOXYFILE_OUT ${CMAKE_CURRENT_BINARY_DIR}/Doxyfile)
#Replace variables inside @@ with the current values
configure_file(${DOXYFILE_IN} ${DOXYFILE_OUT} @ONLY)
file(MAKE_DIRECTORY ${DOXYGEN_OUTPUT_DIR}) #Doxygen won't create this for us
add_custom_command(OUTPUT ${DOXYGEN_INDEX_FILE}
DEPENDS ${CAT_CUTIFIER_PUBLIC_HEADERS}
COMMAND ${DOXYGEN_EXECUTABLE} ${DOXYFILE_OUT}
MAIN_DEPENDENCY ${DOXYFILE_OUT} ${DOXYFILE_IN}
COMMENT "Generating docs")
add_custom_target(Doxygen ALL DEPENDS ${DOXYGEN_INDEX_FILE})
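For reference, the placeholders in Doxyfile.in are just @-delimited variable names that configure_file substitutes. A minimal sketch of the relevant lines (the rest of the generated Doxyfile can stay as it is):
CatCutifier/docs/Doxyfile.in
INPUT = @DOXYGEN_INPUT_DIR@
OUTPUT_DIRECTORY = @DOXYGEN_OUTPUT_DIR@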
Now we can generate our documentation as part of our build system and it’ll only be generated when it needs to be. If you’re happy with Doxygen’s output, you could just stop here, but if you want the additional features and attractive output which reStructuredText and Sphinx give you, then read on.
Setting up Sphinx
Sphinx provides a nice startup script to get us going fast. Go ahead and run this:
> cd docs
> sphinx-quickstart.exe
Keep the defaults and put in your name and the name of your project. Now if you run make html you should get a _build/html folder you can point your browser at to see a welcome screen.
I'm a fan of the Read the Docs theme we installed at the start, so we can use that instead by changing html_theme in conf.py:
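html_theme = 'sphinx_rtd_theme'
That one setting gives the docs a much more modern, minimal look. (Older versions of the theme may also need an explicit import and html_theme_path; check the theme's documentation if the setting alone doesn't work.)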
Before we link in the Doxygen output to give us the documentation we're after, let's automate the Sphinx build with CMake.
Sphinx in CMake
Ideally we want to be able to write find_package(Sphinx REQUIRED) and have everything work. Unfortunately, unlike Doxygen, Sphinx doesn’t have a find module provided by default, so we’ll need to write one. Fortunately, we can get away with doing very little work:
CatCutifier/cmake/FindSphinx.cmake
#Look for an executable called sphinx-build
find_program(SPHINX_EXECUTABLE
NAMES sphinx-build
DOC "Path to sphinx-build executable")
include(FindPackageHandleStandardArgs)
#Handle standard arguments to find_package like REQUIRED and QUIET
find_package_handle_standard_args(Sphinx
"Failed to find sphinx-build executable"
SPHINX_EXECUTABLE)
With this file in place, find_package will work so long as we tell CMake to look for find modules in that directory:
CatCutifier/CMakeLists.txt
cmake_minimum_required (VERSION 3.8)
project ("CatCutifier")
# Add the cmake folder so the FindSphinx module is found
set(CMAKE_MODULE_PATH "${PROJECT_SOURCE_DIR}/cmake" ${CMAKE_MODULE_PATH})
add_subdirectory ("CatCutifier")
add_subdirectory ("docs")
Now we can find this executable and call it:
CatCutifier/docs/CMakeLists.txt
find_package(Sphinx REQUIRED)
set(SPHINX_SOURCE ${CMAKE_CURRENT_SOURCE_DIR})
set(SPHINX_BUILD ${CMAKE_CURRENT_BINARY_DIR}/docs/sphinx)
add_custom_target(Sphinx ALL
COMMAND
${SPHINX_EXECUTABLE} -b html
${SPHINX_SOURCE} ${SPHINX_BUILD}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
COMMENT "Generating documentation with Sphinx")
If you run a build you should now see Sphinx running and generating the same blank docs we saw earlier.
Now we have the basics set up, we need to hook Sphinx up with the information generated by Doxygen. We do that using Breathe.
Setting up Breathe
Breathe is an extension to Sphinx, so we set it up using the conf.py which was generated for us in the last step:
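A minimal sketch of the additions, assuming the CatCutifier project name we've been using: register the extension and tell Breathe which project to use by default.
CatCutifier/docs/conf.py
# Load the Breathe extension and point it at our Doxygen project by default
extensions = [ "breathe" ]
breathe_default_project = "CatCutifier"
Entities are then pulled into your reStructuredText pages explicitly; for example, adding a .. doxygenstruct:: cat directive (with the :members: option) to index.rst renders the documentation for our cat struct. Because Breathe consumes Doxygen's XML output, also make sure GENERATE_XML is set to YES in Doxyfile.in.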
You might wonder why it's necessary to explicitly state which entities we wish to document and where, but this is one of the key benefits of Sphinx: it allows us to add as much additional information (examples, rationale, notes, etc.) as we want to the documentation without having to shoehorn it into the source code, and we can make sure it's displayed in the most accessible, understandable manner we can. Have a look through Breathe's directives, Sphinx's built-in directives, and Sphinx's C++-specific directives to get a feel for what's available.
Now we update our Sphinx target to hook it all together by telling Breathe where to find the Doxygen output:
CatCutifier/docs/CMakeLists.txt
#...
find_package(Sphinx REQUIRED)
set(SPHINX_SOURCE ${CMAKE_CURRENT_SOURCE_DIR})
set(SPHINX_BUILD ${CMAKE_CURRENT_BINARY_DIR}/docs/sphinx)
add_custom_target(Sphinx ALL
COMMAND ${SPHINX_EXECUTABLE} -b html
# Tell Breathe where to find the Doxygen output
-Dbreathe_projects.CatCutifier=${DOXYGEN_OUTPUT_DIR}
${SPHINX_SOURCE} ${SPHINX_BUILD}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
COMMENT "Generating documentation with Sphinx")
Hooray! You should now have some nice Sphinx documentation generated for you.
Finally, we can make sure all of our dependencies are right so that we never rebuild the Doxygen files or the Sphinx docs when we don’t need to:
CatCutifier/docs/CMakeLists.txt
find_package(Doxygen REQUIRED)
find_package(Sphinx REQUIRED)
# Find all the public headers
get_target_property(CAT_CUTIFIER_PUBLIC_HEADER_DIR CatCutifier INTERFACE_INCLUDE_DIRECTORIES)
file(GLOB_RECURSE CAT_CUTIFIER_PUBLIC_HEADERS ${CAT_CUTIFIER_PUBLIC_HEADER_DIR}/*.h)
set(DOXYGEN_INPUT_DIR ${PROJECT_SOURCE_DIR}/CatCutifier)
set(DOXYGEN_OUTPUT_DIR ${CMAKE_CURRENT_BINARY_DIR}/doxygen)
set(DOXYGEN_INDEX_FILE ${DOXYGEN_OUTPUT_DIR}/xml/index.xml)
set(DOXYFILE_IN ${CMAKE_CURRENT_SOURCE_DIR}/Doxyfile.in)
set(DOXYFILE_OUT ${CMAKE_CURRENT_BINARY_DIR}/Doxyfile)
# Replace variables inside @@ with the current values
configure_file(${DOXYFILE_IN} ${DOXYFILE_OUT} @ONLY)
# Doxygen won't create this for us
file(MAKE_DIRECTORY ${DOXYGEN_OUTPUT_DIR})
# Only regenerate Doxygen when the Doxyfile or public headers change
add_custom_command(OUTPUT ${DOXYGEN_INDEX_FILE}
DEPENDS ${CAT_CUTIFIER_PUBLIC_HEADERS}
COMMAND ${DOXYGEN_EXECUTABLE} ${DOXYFILE_OUT}
MAIN_DEPENDENCY ${DOXYFILE_OUT} ${DOXYFILE_IN}
COMMENT "Generating docs"
VERBATIM)
# Nice named target so we can run the job easily
add_custom_target(Doxygen ALL DEPENDS ${DOXYGEN_INDEX_FILE})
set(SPHINX_SOURCE ${CMAKE_CURRENT_SOURCE_DIR})
set(SPHINX_BUILD ${CMAKE_CURRENT_BINARY_DIR}/sphinx)
set(SPHINX_INDEX_FILE ${SPHINX_BUILD}/index.html)
# Only regenerate Sphinx when:
# - Doxygen has rerun
# - Our doc files have been updated
# - The Sphinx config has been updated
add_custom_command(OUTPUT ${SPHINX_INDEX_FILE}
COMMAND
${SPHINX_EXECUTABLE} -b html
# Tell Breathe where to find the Doxygen output
-Dbreathe_projects.CatCutifier=${DOXYGEN_OUTPUT_DIR}/xml
${SPHINX_SOURCE} ${SPHINX_BUILD}
WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
DEPENDS
# Other docs files you want to track should go here (or in some variable)
${CMAKE_CURRENT_SOURCE_DIR}/index.rst
${DOXYGEN_INDEX_FILE}
MAIN_DEPENDENCY ${SPHINX_SOURCE}/conf.py
COMMENT "Generating documentation with Sphinx")
# Nice named target so we can run the job easily
add_custom_target(Sphinx ALL DEPENDS ${SPHINX_INDEX_FILE})
# Add an install target to install the docs
include(GNUInstallDirs)
install(DIRECTORY ${SPHINX_BUILD}
DESTINATION ${CMAKE_INSTALL_DOCDIR})
Try it out and see what gets rebuilt when you change a file. If you change Doxyfile.in or a header file, all the docs should get rebuilt, but if you only change the Sphinx config or reStructuredText files then the Doxygen build should get skipped.
This leaves us with an efficient, automated, powerful documentation system.
If you already have somewhere to host the docs or want developers to build the docs themselves then we’re finished. If not, you can host them on Read the Docs, which provides free hosting for open source projects.
Setting up Read the Docs
To use Read the Docs (RtD) you need to sign up (you can use GitHub, GitLab or Bitbucket to make integration easy). Log in, import your repository, and your docs will begin to build!
Unfortunately, it will also fail:
Traceback (most recent call last):
  File "/home/docs/checkouts/readthedocs.org/user_builds/cpp-documentation-example/envs/latest/lib/python3.7/site-packages/sphinx/registry.py", line 472, in load_extension
    mod = __import__(extname, None, None, ['setup'])
ModuleNotFoundError: No module named 'breathe'
To tell RtD to install Breathe before building, we can add a requirements file:
CatCutifier/docs/requirements.txt
breathe
Another issue is that RtD doesn't understand CMake: it finds the Sphinx config file and runs that directly, so it won't generate the Doxygen information. To generate it, we can add some lines to our conf.py that check whether we're running on the RtD servers and, if so, hardcode some paths and run Doxygen:
CatCutifier/docs/conf.py
import subprocess, os
def configureDoxyfile(input_dir, output_dir):
    with open('Doxyfile.in', 'r') as file:
        filedata = file.read()

    filedata = filedata.replace('@DOXYGEN_INPUT_DIR@', input_dir)
    filedata = filedata.replace('@DOXYGEN_OUTPUT_DIR@', output_dir)

    with open('Doxyfile', 'w') as file:
        file.write(filedata)
# Check if we're running on Read the Docs' servers
read_the_docs_build = os.environ.get('READTHEDOCS', None) == 'True'
breathe_projects = {}
if read_the_docs_build:
    input_dir = '../CatCutifier'
    output_dir = 'build'
    configureDoxyfile(input_dir, output_dir)
    subprocess.call('doxygen', shell=True)
    breathe_projects['CatCutifier'] = output_dir + '/xml'
# ...
Push this change and…
Lovely documentation built automatically on every commit.
Conclusion
All this tooling takes a fair amount of effort to set up, but the result is powerful, expressive, and accessible. None of this is a substitute for clear writing and a strong grasp of what information a user of a library needs to use it effectively, but our new system can provide support to make this easier for developers.
Resources
Thank you to the authors and presenters of these resources, which were very helpful in putting together this post and process: