
An experiment – The Azure Cloud Shell at the command line with AZ SHELL


I've blogged before about the Azure Cloud Shell. It's super cool and you can get your own easily in any browser by hitting https://shell.azure.com. You can have either bash or PowerShell, and you get a shared "cloud drive" that is persisted between sessions.

If you have Visual Studio Code you can get an Azure Cloud Shell integrated within VS Code just by adding the Azure Account extension.

I recently got a build of the new open source Windows Terminal on my machine and set up some profiles with tabs for DOS, PowerShell, VS2019, and Ubuntu, but something was missing. Why can't I get my Azure Cloud Shell?

Sure, I can fire up a VM and ssh into it. But Azure Cloud Shell spins up a free container with a persistent cloud drive AND has a bunch of developer tools like python, node, dotnet, and go already installed. I'd love to use it! But it's not a VM and the container isn't exposed with SSH. Instead, we'll want to spin the Azure Cloud Shell up the same way the https://shell.azure.com site does, with web calls and web sockets. So...why not do it?


I thought I was pretty clever when I had this idea, so I started a C# implementation myself. Then I talked to Anders Liu from work about how to do it right, and over the weekend he beat me to it with his own VERY nice and clean implementation in Go that he put on GitHub at https://github.com/yangl900/azshell. We shared this on an internal alias and found out that Noel Bundick had the same great idea and put it in his Az CLI extensions pack (which has a ton of other cool stuff you should see). Anders' is standalone and Noel's is an Az CLI extension.

Either way, we all think this idea has merit and maybe it should be an official thing! What do you think? Regardless, maybe it doesn't need to be, since you can try it today with these open source options.

Just put "azshell.exe" in your PATH and make sure you have the latest Azure CLI installed and you're logged in.

By the way, you can also get a Cloud Shell inside the Azure Portal. In fact, there's a button for it at the top that looks like >_. Personally, I think the addition of "az shell" (or in this case, azshell.exe) at the command line completes the circle in a really cool way.


Let me know what you think in the comments!






What’s new in Azure Monitor


At Ignite 2018, we shared our vision to bring monitoring of infrastructure, applications, and the network into one unified offering, providing full-stack monitoring for your applications. Over the last few months, individual capabilities such as Application Insights and Azure Monitor logs have come together to provide a seamless and integrated Azure Monitor experience.

We’d like to share our three favorites:

  • End-to-end monitoring for Azure Kubernetes Service
  • Integrated access control for logs
  • Intelligent and scalable alerts

End-to-end monitoring for Azure Kubernetes Service

Today, Azure Kubernetes Service (AKS) customers rely on Azure Monitor for containers to get out-of-the-box monitoring for their AKS clusters. Kubernetes event logs are now available in real time, in addition to live container logs. You can now filter the charts and metrics for specific AKS node pools, and see Node Storage Capacity metrics when you drill down into node details.

Live Kubernetes Events

For monitoring your applications running on AKS, you can instrument them with the Application Insights SDKs. But if you cannot instrument your workloads (for example, you may be running a legacy app or a third-party app), we now have an alternative that doesn't require any instrumentation! Application Insights can leverage your existing service mesh investments (the preview currently supports Istio) to provide application monitoring for AKS without any modification to your app's code. This enables you to immediately start taking advantage of out-of-the-box capabilities like Application Map, Live Metrics Stream, Application Dashboards, Workbooks, User Behavior Analytics, and more.

Application Map for AKS cluster that leverages zero onboarding

Through a combined view of application and infrastructure, Azure Monitor now provides a full-stack monitoring view of Kubernetes clusters.

Integrated access control for logs

Azure Monitor is the central platform for collecting logs across monitoring, management, security, and other log types in Azure. Customers love the powerful, embedded Azure Monitor Logs experience that allows you to run diagnostics, root-cause analysis, statistics, visualizations, and answer any other ad-hoc questions. One of the challenges customers were facing was configuring access control based on a resource. For example, how do you ensure that anyone who has access to a virtual machine (VM) also has access to the logs generated by that VM? In line with our vision to provide you a seamless and native monitoring experience, we now provide granular role-based access control for logs that helps you cascade the permissions you have set at the resource level down to the operational logs.

Use granular RBAC for logs

Users can now also access logs scoped to their resource, allowing them to explore and query logs without needing to understand the entire workspace structure.

Directly access logs scoped to the resource
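As a hedged illustration of what a resource-scoped query looks like from code, here's a sketch using the azure-monitor-query Python library (a newer SDK than this post; the resource ID, table, and query below are placeholders):

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Sketch only: query logs scoped to one resource instead of a whole
# workspace. RBAC on the resource itself gates access to its logs.
resource_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Compute/virtualMachines/<vm-name>"  # placeholder
)

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_resource(
    resource_id,
    "Heartbeat | summarize count() by bin(TimeGenerated, 1h)",
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```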

Intelligent and scalable alerts

Metric Alerts with Dynamic Thresholds, now generally available, enables Azure Monitor to determine the right thresholds for alert rules. Multi-resource alerts make it easy to create a single alert rule and apply it across multiple VMs.

Configuring Dynamic Thresholds

The new Action Rules, available in preview, add more flexibility and finer controls for Action Groups. With Action Rules, scaling Action Groups to suppress alerts during a maintenance window takes only a couple of clicks.

Configuring Suppression with Action Rules

We shared three examples of how we are making Azure Monitor integrated, intelligent, and scalable, but that’s only a part of the story. Here is a list of other exciting announcements coming to you from Build.

  • Preview of Azure Monitor application change analysis, providing a centralized view and analysis of changes at different layers of a web app. The first iteration of this feature is now available in the App Services Diagnose and Solve Problems experience.
  • Improved visualizations in Application Map with better filtering to quickly scope to specific components, and ability to group/expand common dependencies, including Azure Functions v2.
  • Improved codeless instrumentation experience for ASP.NET apps on IIS with the preview of Status Monitor v2. This enables clean redeployments, the latest SDKs, support for TLS 1.2, offline install support, and more!
  • The Application Insights SDK for Java workloads now fully supports the W3C trace context standard and provides monitoring support for async Java apps with a manual instrumentation API. We've also improved ILogger log collection for .NET Core apps and added Live Metrics Stream support for Node.js apps.
  • Workbooks are now a first-class citizen of Azure Monitor and available in the main menu. Use the sample templates to customize interactive reports or troubleshooting guides with rich text, analytics queries, metrics, and various parameters across your apps and infrastructure resources! New templates are also available for Azure Monitor for VMs to monitor open ports and their connections.

Get monitoring

Azure Monitor is constantly evolving to discover new insights and reduce potential issues with applications. Find the latest updates for Azure Monitor in the Azure portal. We want to hear from you! Ask questions or provide feedback.

Ready to get started?

Azure IoT at Build: making IoT solutions easier to develop, more powerful to use



IoT is transforming every business on the planet, and that transformation is accelerating. Companies are harnessing billions of IoT devices to help them find valuable insights into critical parts of their business that were previously not connected—how customers are using their products, when to service assets before they break down, how to reduce energy consumption, how to optimize operations, and thousands of other use cases limited only by companies' imaginations.

Microsoft is leading in IoT because we’re passionate about simplifying IoT so any company can benefit from it quickly and securely.

Last year we announced a $5 billion commitment, and this year we highlighted the momentum we are seeing in the industry. This week, at our premier developer conference, Microsoft Build in Seattle, we’re thrilled to share our latest innovations that further simplify IoT and dramatically accelerate time to value for customers and partners.

Accelerating IoT

Developing a cloud-based IoT solution with Azure IoT has never been faster or more secure, yet we’re always looking for ways to make it easier. From working with customers and partners, we’ve seen an opportunity to accelerate on the device side.

Part of the challenge we see is the tight coupling between the software written on devices and the software that has to match it in the cloud. To illustrate this, it’s worth looking at a similar problem from the past and how it was solved.

Early versions of Windows faced a challenge in supporting a broad set of connected devices like keyboards and mice. Each device came with its own software, which had to be installed on Windows for the device to function. The software on the device and the software that had to be installed on Windows had a tight coupling, and this tight coupling made the development process slow and fragile for device makers.

Windows solved this with Plug and Play, which at its core was a capability model that devices could declare and present to Windows when they were connected. This capability model made it possible for thousands of different devices to connect to Windows and be used without any software having to be installed on Windows.

IoT Plug and Play

Late last week, we announced IoT Plug and Play, which is based on an open modeling language that allows IoT devices to declare their capabilities. That declaration, called a device capability model, is presented when IoT devices connect to cloud solutions like Azure IoT Central and partner solutions, which can then automatically understand the device and start interacting with it—all without writing any code.

IoT Plug and Play also enables our hardware partners to build IoT Plug and Play compatible devices, which can then be certified with our Azure Certified for IoT program and used by customers and partners right away. This approach works with devices running any operating system, whether Linux, Android, Azure Sphere OS, Windows IoT, or an RTOS. And, as always, all of our IoT Plug and Play support is open source.

Finally, Visual Studio Code will support modeling an IoT Plug and Play device capability model as well as generating IoT device software based on that model, which dramatically accelerates IoT device software development.

We’ll be demonstrating IoT Plug and Play at Build, and it will be available in preview this summer. To design IoT Plug and Play, we’ve worked with a large set of launch partners to ensure their hardware is certified ready.


Certified-ready devices are now published in the Azure IoT Device Catalog for the preview, and while Azure IoT Central and Azure IoT Hub will be the first services integrated with IoT Plug and Play, we will add support for Azure Digital Twins and other solutions in the months to come. Watch this video to learn more about IoT Plug and Play, and read this blog post for more details on IoT Plug and Play support in Azure IoT Central.

Announcing IoT Plug and Play connectivity partners

With increased options for low-power networking, the role of cellular technologies in IoT projects is on the rise. Today we’re introducing IoT Plug and Play connectivity partners. Deep integration between these partners’ technologies and Azure IoT simplifies customer deployments and adds new capabilities.

This week at Build, we are highlighting the first of these integrations, which leverages Trust Onboard from Twilio. The integration uses security features built into the SIM to automatically authenticate and connect to Azure, providing a secure means of uniquely identifying IoT devices that works with current manufacturing processes.

These are some of the many connectivity partners we are working with.


Making Azure IoT Central more powerful for developers

Last year we announced the general availability of Azure IoT Central, which enables customers and partners to provision an IoT application in 15 seconds, customize it in hours, and go to production the same day—all without writing code in the cloud.

While many customers build their IoT solutions directly on our Azure IoT platform services, we're seeing a groundswell of customers and partners who like the rapid application development Azure IoT Central provides. And, of course, Azure IoT Central is built on the same great Azure IoT platform services.

Today at Build, we’re announcing a set of new features that speak to how we’re enabling and simplifying Azure IoT Central for developers. We’ll show some of these innovations, such as new personalization features that make it easy for customers and partners to modify Azure IoT Central’s UI to conform with their own look and feel. In the Build keynote, we’ll show how Starbucks is using this personalization feature for their Azure IoT Central solution connected to Azure Sphere devices in their stores.

We’ll also demonstrate Azure IoT Central working with IoT Plug and Play to show how fast and easy this makes it to build an end-to-end IoT solution, with Microsoft still wearing the pager and keeping everything up and running so customers and partners can focus on the benefits IoT provides. Watch this video to learn more about Azure IoT Central announcements.

The growing Azure Sphere hardware ecosystem

Azure Sphere is Microsoft’s comprehensive solution for easily creating secured MCU-powered IoT devices. Azure Sphere is an integrated system that includes MCUs with built-in Microsoft security technology, an OS based on a custom Linux kernel, and a cloud-based security service. Azure Sphere delivers secured communications between device and cloud, device authentication and attestation, and ongoing OS and security updates. Azure Sphere provides robust defense-in-depth device security to limit the reach and impact of remote attacks and to renew device health through security updates.

At Build this week, we’ll showcase a new set of solutions such as hardware modules that speed up time to market for device makers, development kits that help organizations prototype quickly, and our new guardian modules.

Guardian modules are a new class of device built on Azure Sphere that protect brownfield equipment, mitigating risks and unlocking the benefits of IoT. They attach physically to brownfield equipment with no equipment redesign required, processing data and controlling devices without ever exposing vital operational equipment to the network. Through guardian modules, Azure Sphere secures brownfield devices, protects operational equipment from disabling attacks, simplifies device retrofit projects, and boosts equipment efficiency through over-the-air updates and IoT connectivity.

The seven modules and devkits on display at Build are:

  • Avnet Guardian Module. Unlocks brownfield IoT by bringing Azure Sphere’s security to equipment previously deemed too critical to be connected. Available soon.
  • Avnet MT3620 Starter Kit. Azure Sphere prototyping and development platform. Connectors allow easy expandability options with a range of MikroE Click and Grove modules. Available May 2019.
  • Avnet Wi-Fi Module. Azure Sphere-based module designed for easy final product assembly. Simplifies quality assurance with stamp hole (castellated) pin design. Available June 2019.
  • AI-Link WF-M620-RSC1 Wi-Fi Module. Designed for cost-sensitive applications. Simplifies quality assurance with stamp hole (castellated) pin design. Available now.
  • SEEED MT3620 Development Board. Designed for comprehensive prototyping. Available expansion shields enable Ethernet connectivity and support for Grove modules. Available now.
  • SEEED MT3620 Mini Development Board. Designed for size-constrained prototypes. Built on the AI-Link module for a quick path from prototype to commercialization. Available May 2019.
  • USI Dual Band Wi-Fi + Bluetooth Combo Module. Supports BLE and Bluetooth 5 Mesh. Can also work as an NFC tag (for non-contact Bluetooth pairing and device provisioning). Available soon.

For those who want to learn more about the modules, you can find specs for each and links to more information on our Azure Sphere hardware ecosystem page.

See Azure Sphere in action at Build

Azure Sphere is also taking center stage at Build during Satya Nadella’s keynote this week. Microsoft customer and fellow Seattle-area company Starbucks will showcase how it is testing Azure IoT capabilities and guardian modules built on Azure Sphere within select equipment to enable partners and employees to better engage with customers, manage energy consumption and waste reduction, ensure beverage consistency, and facilitate predictive maintenance. The company’s solution will also be on display in the Starbucks Technology booth.

Announcing new Azure IoT Edge innovations

Today, we are announcing the public preview of Azure IoT Edge support for Kubernetes. This enables customers and partners to deploy an Azure IoT Edge workload to a Kubernetes cluster on premises. We’re seeing Azure IoT Edge workloads being used in business-critical systems at the edge. With this new integration, customers can use the feature-rich and resilient infrastructure layer that Kubernetes provides to run their Azure IoT Edge workloads, which are managed centrally and securely from Azure IoT Hub. Watch this video to learn more.

Additional IoT Edge announcements include:

  • Preview of Azure IoT Edge support for Linux ARM64 (expected to be available in June 2019).
  • General availability of IoT Edge extended offline support.
  • General availability of IoT Edge support for Windows 10 IoT Enterprise x64.
  • New provisioning capabilities using X.509 certificates and SAS tokens.
  • New built-in troubleshooting tooling.

A common use case for IoT Edge is transforming cameras into smart sensors that understand the physical world and enable a digital feedback loop: finding a missing product on a shelf, detecting damaged goods, and so on. These scenarios require demanding computer vision algorithms that deliver consistent and reliable results, large-scale streaming capabilities, and specialized hardware for faster processing to provide real-time insights to businesses. At Build, we're partnering with Lenovo and NVIDIA to simplify the development and deployment of these applications at scale. With the NVIDIA DeepStream SDK for general-purpose streaming analytics, a single IoT Edge server running on Lenovo hardware can process up to 70 channels of 1080p/30fps H.265 video, offering a cost-effective and faster time-to-market solution.

This summer, NVIDIA DeepStream SDK will be available from the IoT Edge marketplace. In addition, Lenovo’s new ThinkServer SE350 and GPU-powered “tiny” edge gateways will be certified for IoT Edge.

Announcing Mobility Services through Azure Maps

Today, an increasing number of apps built on Azure are designed to take advantage of location information in some way.

Last November, we announced a new platform partnership for Azure Maps with the world’s number-one transit service provider, Moovit. What we’re achieving through this partnership is similar to what we’ve built today with TomTom. At Build this week, we’re announcing Azure Maps Mobility Services, which will be a set of APIs that leverage Moovit’s APIs for building modern mobility solutions.

Through these new services, we're able to integrate public transit, bike shares, scooter shares, and more to deliver transit route recommendations that allow customers to plan routes leveraging alternative modes of transportation, optimizing for travel time and minimizing traffic congestion. Customers will also be able to access real-time intelligence on bike and scooter docking stations and car-share-vehicle availability, including present and expected availability and real-time transit stop arrivals.

Customers can use Azure Maps for IoT applications—or any application that uses geospatial or location data, such as apps for field service, logistics, manufacturing, and smart cities. Retail apps may integrate mobility intelligence to help customers access their stores or plan future store locations that optimize for transit accessibility. Field services apps may guide employees from one customer to another based on real-time service demand. City planners may use mobility intelligence to analyze the movement of occupants to plan their own mobility services, visualize new developments, and prioritize locations in the interests of occupants.

You can stay up to date about how Azure Maps is paving the way for the next generation of location services on the Azure Maps blog, and if you’re at Build this week, be sure to visit the Azure Maps booth to see our mobility and spatial operations services in action.

Simplifying development of robotic systems with Windows 10 IoT

Microsoft and Open Robotics have worked together to make the Robot Operating System (ROS) generally available for Windows 10 IoT. Additionally, we're making it even easier to build ROS solutions in Visual Studio Code by adding upcoming support for Windows, debugging, and visualization to a community-supported Visual Studio Code extension. Read more about the integration between Windows 10 IoT and ROS.

Come see us at Build

If you’re in Seattle this week, you can see some of these new technologies in our booth, and even play around with them at our IoT Hands-on Lab. I’ll also be hosting a session on our IoT Vision and Roadmap. Stop by to hear more details about these announcements and see some of these exciting new technologies in action.

Advancing the developer experience for serverless apps with Azure Functions


Azure Functions constantly innovates so that you can achieve more with serverless applications, enabling developers to overcome common serverless challenges through a productive, event-driven programming model. Several releases we made in the last few weeks are good examples of this.

The new releases and improvements do not stop there, and today we are pleased to present several advancements intended to provide a better end-to-end experience when building serverless applications. Keep reading below to learn more about the following:

  • A new way to host Azure Functions in Kubernetes environments
  • Stateful entities with Durable Functions (in preview)
  • Less cluttered .NET applications with dependency injection
  • Streamlined deployment with Azure DevOps
  • Improved integration with Azure API Management (in preview)

Bring Azure Functions to Kubernetes with KEDA

There’s no better way to leverage the advantages of serverless than using a fully managed service in the cloud like Azure Functions. But some applications might need to run in disconnected environments, or they may require custom hardware and dependencies. Customers usually take a containerized approach for these scenarios, in which Kubernetes is the de facto industry standard. Managing application-aware, event-driven scale in these environments is non-trivial, and the platform's default resource-based scaling (on CPU or memory) is usually insufficient.

Microsoft and Red Hat partnered to build Kubernetes-based event-driven auto scaling (KEDA). KEDA is an open source component for Kubernetes that provides event-driven scale for any container workload, enabling containers to scale from zero to thousands of instances based on event metrics, such as the length of an Azure Queue or Kafka stream, and back to zero again when done processing.

Since Azure Functions can be containerized, you can now deploy a Function App to any Kubernetes cluster, keeping the same scaling behavior you would have on the Azure Functions service.
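To make that concrete, here's a minimal sketch of the kind of workload KEDA scales: a queue-triggered function written in Python. The queue name and connection setting are illustrative and live in the function's function.json binding:

```python
# __init__.py -- a minimal Azure Functions queue trigger in Python.
# Under KEDA, instances of this function scale out as the queue grows and
# back to zero once it drains. The binding (queue name, connection string)
# is configured in the accompanying function.json; names are illustrative.
import logging

import azure.functions as func


def main(msg: func.QueueMessage) -> None:
    body = msg.get_body().decode("utf-8")
    logging.info("Processing queue item: %s", body)
```

Packaged into a container and deployed to a cluster with KEDA installed, this same function scales on queue length just as it would on the Azure Functions service.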

Kubernetes cluster diagram showing journey from CLI to external trigger source

This is a significant milestone for the open source ecosystem around Kubernetes, so we're sharing much more detail in a separate blog post titled "Announcing KEDA: bringing event-driven containers and functions to Kubernetes." If you want to learn more, register today for the Azure webinar series scheduled for later in May, where we will go more in depth on this exciting topic.

Durable Functions stateful patterns

We have been thrilled with the excitement and energy from the community around Durable Functions, our extension to the Functions runtime that unlocks stateful and workflow patterns for serverless applications. Today we are releasing some new capabilities in a preview package of Durable Functions.

For stateful functions that map to an entity like an IoT device or a gaming session, you can use the new stateful entity trigger for actor-like capabilities in Azure Functions. We are also making the state management of your stateful functions more flexible with preview support for Redis cache as the state provider for Durable Functions, enabling scenarios where applications may run in a disconnected or edge environment.

You can learn more about the new durable features in our documentation, “Durable Functions 2.0 preview (Azure Functions).”

Dependency injection for .NET applications

We are constantly striving to add new patterns and capabilities that make functions easier to code, test, and manage. .NET developers have been taking advantage of dependency injection (DI) to better architect their applications, and today we’re excited to support DI in Azure Functions written in .NET. This enables simplified management of connections plus dependent services, and unlocks easier testability for functions that you author.

Learn more about dependency injection in our documentation, “Use dependency injection in .NET Azure Functions.”

Streamlined Azure DevOps experience

With new build templates in Azure Pipelines, you will have the ability to quickly configure your Azure Pipeline with function-optimized tasks to build your .NET, Node.js, and Python applications. We are also announcing the general availability of the Azure Functions deployment task, which is optimized to work with the best deployment option for your function app.

Configuring Azure Pipelines with function-optimized tasks

Deploying a function app to Azure Functions

Additionally, with the latest Azure CLI release we introduced a new command that can automatically create and configure an Azure DevOps pipeline for your function app. The DevOps definition now lives with your code, which allows you to fine-tune build and deployment tasks.

For more detailed information, please check our documentation, "Continuous delivery using Azure DevOps."

Defining and managing your Functions APIs with serverless API Management

We have also simplified how you can expose and manage APIs built with Azure Functions through API Management. With this improved integration, the Function Apps blade in the Azure portal presents an option to expose your HTTP-triggered functions through a new or an existing API in API Management.

Linking API Management dashboard to Function Apps

Once the Function App is linked with API Management, you can manage API operations, apply policies, edit and download OpenAPI specification files, or navigate to your API Management instance for a full-featured experience.

Learn more about how to expose your Function Apps with API Management in our documentation.

Sharing is caring

We have also included a set of improvements to the Azure Serverless Community Library, including an updated look, a streamlined sample submission process, and more detailed information about each sample. Check out the Serverless Community Library to gain inspiration for your next serverless project, and share something cool once you’ve built it.

Get started today

With Functions options expanding and quickly improving, we’d sincerely love to hear your feedback. You can reach the team on Twitter and on GitHub, and we also actively monitor StackOverflow and UserVoice. For the latest updates, please subscribe to our monthly live webcast.

Tell us what you love about Azure Functions, and start learning more about all the new capabilities we are presenting today:

  • Azure App Service update: Free Linux Tier, Python and Java support, and more

Planet scale operational analytics and AI with Azure Cosmos DB



We’re excited to announce new Azure Cosmos DB capabilities at Microsoft Build 2019 that enable anyone to easily build intelligent globally distributed apps running at Cosmos scale:

  • Planet scale, operational analytics with built-in support for Apache Spark in Azure Cosmos DB
  • Built-in Jupyter notebooks support for all Azure Cosmos DB APIs

See the other Azure Cosmos DB announcements and hear the voices of our customers.

Built-in support for Apache Spark in Azure Cosmos DB

Map of Apache Spark locations.

Our customers love the fact that Azure Cosmos DB enables them to elastically scale throughput with guaranteed low latency worldwide, and they also want to run operational analytics directly against the petabytes of operational data stored in their Azure Cosmos databases. We are excited to announce the preview of native integration of Apache Spark within Azure Cosmos DB. You can now run globally distributed, low-latency operational analytics and AI on transactional data stored within your Cosmos databases. This provides the following benefits (a short code sketch follows the list):

  • Fast time-to-insights with globally distributed Spark. With native Apache Spark support on your multi-mastered, globally distributed Cosmos database, you can now get blazing fast time-to-insight all around the world. Since your Cosmos database is globally distributed, all the data is ingested and queries are served against the local database replica closest to both the producers and the consumers of data, all around the world.
  • Fully managed experience and SLAs. Apache Spark jobs enjoy the industry-leading, comprehensive 99.999 percent SLAs offered by Azure Cosmos DB, without the hassle of managing separate Apache Spark clusters. Azure Cosmos DB automatically and elastically scales the compute required to execute your Apache Spark jobs across all Azure regions associated with your Cosmos database.
  • Efficient execution of Spark jobs on multi-model operational data. All your Spark jobs are executed directly on the indexed multi-model data stored inside the data partitions of your Cosmos containers, without requiring any unnecessary data movement.
  • OSS APIs for transactional and analytical data processing. Along with using the familiar OSS client drivers for Cassandra, MongoDB, and Gremlin (along with the Core SQL API) for your operational workloads, you can now use Apache Spark for your analytics, all operating on the same underlying globally distributed data stored in your Cosmos database.
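As a sketch of what this looks like from Spark, here's a read using the open source azure-cosmosdb-spark connector (the endpoint, key, database, and collection below are placeholders, and the connector jar must be available on the cluster); the built-in experience wires this plumbing up for you:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cosmos-analytics").getOrCreate()

# Sketch only: read a Cosmos container into a Spark DataFrame with the
# open source azure-cosmosdb-spark connector. All values are placeholders.
read_config = {
    "Endpoint": "https://<account>.documents.azure.com:443/",
    "Masterkey": "<key>",
    "Database": "retail",
    "Collection": "orders",
}

orders = (
    spark.read.format("com.microsoft.azure.cosmosdb.spark")
    .options(**read_config)
    .load()
)

# Operational analytics directly over the indexed transactional data.
orders.groupBy("status").count().show()
```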

We believe that the native integration of Apache Spark into Azure Cosmos DB bridges the transactional and analytical divide that has been one of the major customer pain points in building cloud-native applications at global scale.

Several of Azure’s largest enterprise customers are running globally distributed operational analytics on their Cosmos databases with Apache Spark. Coca-Cola is one such customer; watch their story.

An image of Barry Simpson, SVP & Global Chief Information Officer for the Coca-Cola Company.

Coca-Cola using Azure Cosmos DB for globally-distributed operational analytics

“Being able to scale globally and have insights that are actually delivered within minutes at a global scale is very important for us. Putting our data in a service like Azure Cosmos DB allows us to draw insights across the world much faster, going from the hours that we used to take a couple years ago down to minutes.”
- Neeraj Tolmare, CIO, Global Head of Digital & Innovation at The Coca-Cola Company

Explore more of the Azure Cosmos DB API for Apache Spark.

Cosmic notebooks

We are also thrilled to announce the preview of Jupyter notebooks running inside Azure Cosmos DB, available for all APIs (including Cassandra, MongoDB, SQL, Gremlin, and Apache Spark) to further enhance the developer experience on Azure Cosmos DB. With native notebook support for all Azure Cosmos DB APIs and data models, developers can now interactively run queries, execute ML models, and explore and analyze the data stored in their Cosmos databases. The notebook experience also enables easy exploration of the stored data, building and training machine learning models, and performing inferencing on the data, all from the familiar Jupyter notebook experience directly inside the Azure portal.

Learn more about Jupyter notebooks.
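For a flavor of the experience, here's the kind of cell you might run in a notebook against a SQL API container using the azure-cosmos Python SDK; the account, database, container, and query are all placeholders:

```python
# Sketch of a notebook cell: query a SQL API container with the
# azure-cosmos Python SDK. Account URL, key, and names are placeholders.
from azure.cosmos import CosmosClient

client = CosmosClient(
    "https://<account>.documents.azure.com:443/", credential="<key>"
)
container = client.get_database_client("retail").get_container_client("orders")

items = container.query_items(
    query="SELECT c.id, c.total FROM c WHERE c.total > @min",
    parameters=[{"name": "@min", "value": 100}],
    enable_cross_partition_query=True,
)

for item in items:
    print(item["id"], item["total"])
```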

A Jupyter notebook.
Built-in support for Jupyter notebooks in Azure Cosmos DB

We also announced a slew of new capabilities and improvements for developers, including a new API for etcd, offering native, Azure Cosmos DB-backed etcd support to power your self-managed Kubernetes clusters on Azure; support for OFFSET and LIMIT in our SQL API; and other SDK improvements.

We are extremely grateful to our customers, who are building amazingly cool, globally distributed apps and trusting Azure Cosmos DB with their mission critical workloads at massive scale. Their stories inspire us.

Host multiplayer Minecraft: Education Edition on Azure Virtual Machines



The creative nature of Minecraft has made it one of the premier educational tools for the modern classroom. Teachers around the world have designed, modified, and explored collaborative Minecraft projects for all subjects, and with Minecraft: Education Edition it has become even easier for teachers to spin up multiplayer servers right from their own machines and lead their classes in collaborative building and problem solving.

We're excited to announce that the new Minecraft: Education Edition virtual machine (preview) is available on the Azure Marketplace. This release allows teachers to run multiplayer Minecraft: Education Edition sessions with the scalability, performance, and security of Azure. Students need only log in with their school-issued email address to join the learning! And institutions that have a Minecraft: Education Edition license through select Microsoft 365 Education plans pay only for what they use on the virtual machine itself.

Running Minecraft: Education Edition on Azure can provide teachers more flexibility and control over the learning experience. Many teachers may not have a personal or organization-issued device that can host large multiplayer sessions. Or they aren't able to leave the multiplayer instance open for students to connect from home, making the environment accessible only during class hours. This is just the first step in pairing educational experiences like Minecraft: Education Edition with the Azure cloud. We invite you to share your thoughts and feedback.

Azure provides a $200 credit and a free tier of services (including virtual machines) for educators and IT administrators through the Azure free account. Students can also receive a $100 credit and a free tier of services, without requiring a credit card, through academic verification with Azure for Students.

Learn more about Minecraft: Education Edition, Microsoft 365 Education, and Azure in Education to try out this new offering in your classroom. Or, share this information with your local schools!

In just a couple steps, teachers can start a multiplayer Minecraft: Education Edition server and students can connect from anywhere, anytime.

If you couldn’t visit us at Microsoft Build 2019 and try out the multiplayer experience for yourself, check out the DIY site for more details.

We are excited to launch a pilot with a few current Minecraft: Education Edition teachers to try out this new Azure VM experience.

If you are a teacher or school IT leader interested in partnering with us to improve this Minecraft: Education Edition experience, please message me (Sarah Guthals on LinkedIn). We look forward to continuing partnerships in education!

Build with Azure IoT Central and IoT Plug and Play


We’ve made it our mission to provide powerful yet simple-to-use IoT offerings across cloud and edge, so that our partners and customers can quickly move from idea, to pilot, and then production without the need for deep expertise. Azure IoT Central and IoT Plug and Play are at the center of our quest to simplify the IoT journey so that any customer, no matter where they’re starting from, can quickly and easily create trusted, connected solutions.

Build with Azure IoT Central to integrate new streams of data into your business processes

Azure IoT Central is a fully managed IoT solution that makes it easy to connect, monitor, and manage your IoT devices and products. Azure IoT Central simplifies the initial setup of your IoT solution and reduces the management burden, operational costs, and overhead of a typical IoT project.

A growing population of developers and partners are discovering the powerful ability to build with Azure IoT Central. This approach to building allows you to apply your energy, creativity, and unique domain expertise to solving customer needs and creating business value, while also leaving the dirty work of managing a global scale solution with high availability and disaster recovery to Microsoft. You can create your own finished, branded SaaS product in a fraction of the time while we handle the difficult aspects of operating, managing, securing, and scaling an IoT solution.

New capabilities in Azure IoT Central

Recently, we added features to allow for easier, more seamless integration between Azure IoT Central and a customer’s existing enterprise and line of business solutions, including insights and actions.

  • Rules and alerts: The powerful rules engine, backed by Azure Stream Analytics, enables monitoring and alerts on any events that affect device health and usage. It delivers clean and consistent data to the line of business apps by integrating with Azure Functions, Microsoft Flow, and Logic Apps.
  • Visualization: Multiple dashboards and data visualization options are available for different types of users. Instead of using a one-size-fits-all approach, application administrators can now create custom views for different types of users.
  • Connectors: Inbound and outbound data connectors allow operators to integrate with third-party systems. Use the device bridge to ingest data from other clouds into an Azure IoT Central application, and use the continuous data export feature to bring data into downstream business applications (a device bridge sketch follows this list).
  • Personalization: Add custom branding and operator resources to an Azure IoT Central application with new white labeling options for a better visual fit with your organization’s other applications.
  • Device management: Manage devices at scale with ease. Copy a job created earlier, save a job to finish later, stop or resume a running job, and download a job details report after completion. The new Device Template library makes it easier to onboard and model devices, while new device authoring separates the template from the device and simplifies the overall experience.
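As an illustration of the device bridge mentioned above, here's a sketch of forwarding a telemetry reading to a bridge endpoint. The URL is a placeholder, and the payload shape is an assumption based on the open source device bridge sample, so verify it against the current repo:

```python
import requests

# Sketch only: POST a reading to an IoT Central device bridge endpoint.
# The function URL is a placeholder; the payload shape is an assumption
# based on the open source device bridge sample.
bridge_url = "https://<your-bridge>.azurewebsites.net/api/IoTCIntegration?code=<key>"

payload = {
    "device": {"deviceId": "greenhouse-sensor-01"},           # placeholder
    "measurements": {"temperature": 22.4, "humidity": 51.0},  # placeholder
}

response = requests.post(bridge_url, json=payload, timeout=10)
response.raise_for_status()
```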

Enabling innovation across industries

Customers building with Azure IoT Central today include Eaton, a global power management company. Eaton provides solutions to manage electrical, hydraulic, and mechanical power more efficiently, safely, and sustainably. Eaton is using Azure IoT Central to enable easy application development for its industry-first Energy Management Circuit Breaker (EMCB). This next-generation "smart breaker" combines the safety functionality of a standard circuit breaker with built-in cloud connectivity and on-board intelligence to support grid optimization. It is a significant transformation of circuit breaker technology and offers revenue-grade branch circuit metering, communications capabilities, and remote access.

Sagegreenlife is also using Azure IoT Central, in partnership with Cradlepoint, to ensure a superior customer experience. Sagegreenlife creates modular, scalable, and flexible living walls that infuse living design into modern environments. Living design brings our environments to life by combining technology, science, and design to incorporate natural elements into homes, offices, schools, stores, hospitals, and other daily structures. Azure IoT Central allows Sagegreenlife to improve operational efficiency and resolve issues before they have an impact on the plants.

By removing layers of complexity and integrating with your existing solutions and business processes, Azure IoT Central provides organizations with the simplicity they need to create the next wave of innovation in IoT. Azure IoT Central is open for business and ready for you to join the thousands of developers who are using it now.

Have ideas or suggestions for new features? Post them on UserVoice. If you’ve already started exploring Azure IoT Central, please consider taking this survey so we can hear your feedback.

Use IoT Plug and Play to connect devices that integrate seamlessly with your solution

The start of any IoT journey involves connecting things and acquiring data. This requires bringing together disciplines in cloud development, hardware procurement, embedded development, and systems integration, so it can be a daunting first step. Such a process often results in prolonged project timelines and can, in many cases, delay or even block the successful completion of an IoT digital transformation. Our goal is to provide integration between off-the-shelf devices and cloud solutions to make completing this critical step much faster. We recently announced IoT Plug and Play, which is based on an open modeling language that allows IoT devices to declare their capabilities.

Simplified IoT

IoT Plug and Play allows solution developers to integrate devices without writing any embedded code. At the center of IoT Plug and Play is a schema that describes device capabilities. We refer to this as a device capability model, a JSON-LD document structured as a set of interfaces made up of properties (attributes like firmware version, or settings like fan speed), telemetry (sensor readings such as temperature, or events such as alerts), and commands the device can receive (such as reboot). Interfaces can be reused across device capability models to facilitate collaboration and speed development.
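To make that shape concrete, here's a hypothetical capability model sketched as a Python dictionary. The identifiers and field names are illustrative assumptions in the spirit of the schema described above, not the normative IoT Plug and Play/DTDL format:

```python
import json

# Hypothetical capability model: one interface with a property, a piece of
# telemetry, and a command. Field names and IDs are illustrative only.
capability_model = {
    "@id": "urn:contoso:thermostat:1",
    "@type": "CapabilityModel",
    "implements": [
        {
            "@id": "urn:contoso:thermostat:sensing:1",
            "@type": "Interface",
            "contents": [
                {"@type": "Telemetry", "name": "temperature", "schema": "double"},
                {"@type": "Property", "name": "fanSpeed", "schema": "integer", "writable": True},
                {"@type": "Command", "name": "reboot"},
            ],
        }
    ],
}

print(json.dumps(capability_model, indent=2))
```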

Open collaboration

To make IoT Plug and Play work seamlessly with Azure Digital Twins, we have unified the IoT Plug and Play schema with our upcoming Digital Twin Definition Language (DTDL). IoT Plug and Play and the DTDL are open to the community, and Microsoft welcomes collaboration with customers, partners, and the industry. Both are based on open W3C standards such as JSON-LD and RDF, which allows for easier adoption across services and tooling. Additionally, there is no extra cost for using IoT Plug and Play and DTDL. Standard rates for IoT Hub, Azure IoT Central, and other Azure services will remain the same.

All solutions built on Azure IoT, including Azure IoT Central, will support IoT Plug and Play. Today, Azure IoT Central supports modeling properties, telemetry, commands, and semantic types; these will now be formalized into IoT Plug and Play device models. The Compal GT100 tracker, for example, is enabled with IoT Plug and Play and shows its telemetry in Azure IoT Central.

Compal GT100 tracker enabled with IoT Plug and Play

Launch partners

IoT Plug and Play will be available for preview later this year. We’ve worked with a large set of launch partners to ensure their hardware is certified-ready for IoT Plug and Play. Browse the IoT Plug and Play certified-ready devices in the catalog and download their capability models, or contact the following partners to learn more about their product offerings.

Partners offering IoT Plug and Play certified-ready devices

If you’d like to learn how to participate ahead of the public launch, we recommend that you review the definition language, available today on GitHub, and provide feedback directly. Encourage your close device partners to learn more about IoT Plug and Play and to participate in Azure Certified for IoT ahead of the public launch.

Next steps

To learn more about the new features in Azure IoT Central or IoT Plug and Play, check out the following resources.


Open Robotics and Microsoft release ROS on Windows 10 IoT Enterprise


From tiny toys on supermarket shelves to building-sized material haulers, today’s robots come in all shapes and sizes. And thanks to a range of advancements in their components and technologies, they are also becoming more capable and cost effective.

Robots may be the ultimate intelligent edge device. A robot needs to observe the world using many sensors, and reason about what it has observed in order to develop a plan of action. It then needs to perform those actions quickly and safely, often with limited internet connectivity.

One of the most popular frameworks for building that complex functionality is the Robot Operating System (ROS) maintained by Open Robotics, a mature, open source robotics framework used worldwide for commercial and research applications. ROS’ interoperability, body of samples, and community make it valuable for building an automated solution.

Last fall at the ROSCon 2018 conference in Madrid, we announced an experimental release of ROS for Windows. Since then, we’ve been working with Open Robotics to build out support for ROS. This week at the Microsoft Build conference in Seattle, we are pleased to announce the culmination of those efforts: ROS is now generally available on Windows 10.

Windows 10 IoT Enterprise provides the full power of Windows 10, packaged to meet the needs of IoT and intelligent edge devices. It shares all the benefits of the worldwide Windows ecosystem—a rich device platform, world-class developer tools, integrated security, long-term support and a global partner network.

“We’re excited to add Windows IoT as a supported platform for ROS. The ROS developer community can now take advantage of a wide array of features in Windows IoT, including hardware-accelerated machine learning, computer vision and cloud capabilities such as Azure Cognitive Services. I look forward to seeing the next generation of Windows IoT-supported ROS applications.” — Brian Gerkey, CEO of Open Robotics.

With support for ROS, the Windows platform now provides a fast, safe, smart and manageable foundation for robotics solutions that also allows developers to do more at the edge using machine learning capabilities and all the scalability and power of Azure IoT:

  • Microsoft Azure Cognitive Services provides AI solutions that can infuse robots with intelligent algorithms to see, hear, speak, understand and interpret their environments using natural methods of communication.
  • The Microsoft ROS Node for Azure IoT Hub allows a system administrator to monitor the health of a robot and its tasks by monitoring specific message streams.
  • The Microsoft AI platform can act as the brain of the robot, with inferencing capabilities that work across any hardware platform. Using the industry standard ONNX model format trained locally or in the cloud, developers can accelerate machine learning at the edge—meaning the robot can run the models itself without consuming expensive bandwidth transmitting images to the cloud.

These capabilities add to the thousands of behaviors, skills and drivers already developed by the ROS community that can be composed to create the mind of a robot. With the core of ROS enabled on Windows 10 IoT Enterprise, many of these components can be made available to Windows with minimal effort.
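For a sense of how portable that body of work is, here's the classic ROS "talker" node in Python. With ROS now generally available on Windows 10 IoT Enterprise, a node like this runs there unchanged, assuming a configured ROS installation (the topic and node names are the usual tutorial placeholders):

```python
#!/usr/bin/env python
# The classic rospy "talker": publish a string once per second.
import rospy
from std_msgs.msg import String


def talker():
    pub = rospy.Publisher("chatter", String, queue_size=10)
    rospy.init_node("talker", anonymous=True)
    rate = rospy.Rate(1)  # 1 Hz
    while not rospy.is_shutdown():
        pub.publish("hello from Windows 10 IoT")
        rate.sleep()


if __name__ == "__main__":
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
```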

Additionally, Microsoft will soon be adding functionality to a community-supported Visual Studio Code extension—adding support for Windows, debugging and visualization to enable easier development for ROS solutions.

I’m hosting a session at Build on Wednesday along with Principal Program Manager Lead James Coliz, so be sure to stop by if you want to learn more and see some of these technologies in action. To get started with ROS on Windows now, please visit http://aka.ms/ros.

And to learn more about what Microsoft is doing at Build this week when it comes to IoT, see Sam George’s roundup of all the goings-on.


Microsoft Azure IoT Device Agent V2 general availability


We are excited to announce the general availability of Microsoft Azure IoT Device Agent V2. The Microsoft Azure IoT Device Agent is an open source, ready-to-build-and-package solution for the Windows 10 IoT Enterprise and Windows 10 IoT Core operating systems that provides built-in capabilities to remotely provision, configure, monitor, and manage your IoT devices.

Flow chart showing Azure IoT Services
When it comes to the widespread deployment of IoT devices, one of the key challenges is remote manageability. IoT devices are deployed out in the field or on factory floors where direct device access is not always possible or practical. As an operator, you also want to ensure that you can deploy software and security updates across all the IoT devices in the field to protect and secure the data as well as the device.

What’s new

The Microsoft Azure IoT Device Agent has been in public preview since January 2019. If you were using the public preview version, we hope you found it useful, and we are eager to hear your feedback. With general availability, you now have access to the following new features:

1) UWP management plugin and UWP bridge: You can now remotely manage the UWP apps running on your devices using the UWP management plugin. The Device Agent also includes a UWP bridge, which allows UWP applications to leverage a subset of the Device Agent's capabilities, such as TPM access and reboot.

2) Diagnostics and error reporting: Troubleshooting issues on your IoT devices is a lot easier with quick access to collect logs and error information from the device and upload them to Azure Storage.

3) Device Agent extensibility: Ready-to-use Visual Studio templates and NuGet packages make it easy for device developers to create their own plugins and extend the Device Agent's capabilities.

See the complete list of capabilities below for more information.

Microsoft Azure IoT Device Agent capabilities

The Microsoft Azure IoT Device Agent provides you with the following capabilities to support these scenarios:

  • Device provisioning: The Microsoft Azure IoT Device Agent integrates with the Azure Device Provisioning Service (DPS) client SDK to automatically provision devices and create their Azure IoT Hub identities. The Device Agent registers the device with Azure IoT Hub, ensuring the identity assignment of the device and the applications running on it. The Microsoft Azure IoT Device Agent leverages Azure IoT module twins so that each process that connects to IoT Hub is associated with the same IoT Hub device. If your solution demands a custom provisioning client, you can build one by downloading the Azure Device Provisioning Service device SDK and implementing your own provisioning service.
  • Manage cloud connectivity: Once the Microsoft Azure IoT Device Agent establishes the connection with the IoT Hub, it continues to manage the Azure cloud connection and the renewal of the SAS token before it expires.
  • Remote device management: The Microsoft Azure IoT Device Agent provides the following remote device management capabilities via built-in plugins (a device twin sketch follows this list):
    1. Device info: Enables you to retrieve information like Device ID, manufacturer information, firmware version, etc.
    2. Reboot: Enables you to remotely reboot the device or schedule a reboot.
    3. Remote wipe: Enables you to wipe out all data on the device and restore the device to a clean state.
    4. Factory reset: Enables you to apply a factory image on the device bringing it back to its original state.
    5. Time management: Enables you to configure an NTP service and set the time zone on the device.
    6. Windows Update management: Enables you to enforce OS update policies on the device.
    7. Certificate management: Enables you to remotely install or uninstall certificates.
    8. Windows telemetry management: Enables you to configure the level of telemetry that is being reported from the device.
  • Device Agent extensibility: As a device builder, you can write your own plugins which can interface with the Microsoft Azure IoT Device Agent. The Device Agent makes it easy for you to build your own custom code by providing you with the following hooks:
    1. Includes a plugin model to bridge platform components with the agent and consequently with IoT Hub. The plugin model enables discovery, initialization, error reporting, and state aggregation. The model includes a Visual Studio template and NuGet packages, which help you quickly create your own plugin while abstracting the communication between the Device Agent and your plugin.
    2. When a handler has dependencies on other handlers, Azure IoT Device Agent can make sure that the dependencies are processed in the right order.
    3. Handlers have a versioning model to prevent mismatches in twin schemas or mismatches between plugin versions and the Device Agent.
  • Remote UWP application management: As a solution operator you can now remotely deploy, update, remove, start, or stop UWP applications on your devices using the UWP management plugin. On Windows IoT Core, you can also designate an application to be the start-up application.
  • Device Agent plugin capabilities in UWP apps: UWP applications can also leverage the UWP bridge, which enables the UWP application to leverage a limited subset of the functionalities of the Device Agent such as retrieving DPS enrollment information from the TPM or accessing admin-privileged functionality such as reboot.
  • Diagnostics and error reporting: Errors and diagnostics logs are now easily accessible for troubleshooting purposes using the Microsoft Azure IoT Device Agent.
    1. Solution operators can now collect ETW logs on their IoT devices and upload them back to the cloud for inspection.
    2. The error reports have been enhanced to include details of the sub-system or process where the error originally occurred, helping users to better interpret error messages.
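As a rough sketch of how an operator-side tool might drive one of these capabilities, here's a device twin patch using the azure-iot-hub Python service SDK. The desired property names are hypothetical stand-ins, not the Device Agent's actual twin schema, so check the documentation for the real property layout:

```python
from azure.iot.hub import IoTHubRegistryManager
from azure.iot.hub.models import Twin, TwinProperties

# Sketch only: push a desired-property change to a device twin. The
# "rebootInfo" property below is a hypothetical stand-in; consult the
# Device Agent documentation for its real twin schema.
registry_manager = IoTHubRegistryManager("<iothub-connection-string>")

device_id = "factory-gateway-01"  # placeholder
twin = registry_manager.get_twin(device_id)

patch = Twin(
    properties=TwinProperties(
        desired={"rebootInfo": {"singleRebootTime": "2019-05-10T03:00:00Z"}}
    )
)
registry_manager.update_twin(device_id, patch, twin.etag)
```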

Getting started

Check out the Quick Start for more information or download the code or exe from this GitHub repo.

We want to hear from you!

As always, your feedback is very important to us! Please share your comments, questions, or concerns on our   or comment below. We’re listening!

 


Modernizing Windows CE systems with Windows 10 IoT


Microsoft has provided platforms and operating systems for embedded devices for decades. As new offerings such as Windows 10 IoT have become available, our customers and partners are increasingly interested in the advanced security, platform, and cloud connectivity features these OSes provide. Customers moving from most earlier editions of Windows, like Windows XP and Windows 7, can do so with little effort because their applications are binary-compatible. Other operating systems, like Windows CE, require device builders to modify source code, and porting applications like this can be challenging.

To help these customers move to Windows 10 IoT and harness the full power of the intelligent edge including artificial intelligence and machine learning, Microsoft is developing technology that will allow most customers to run their existing, unmodified Windows CE applications on Windows 10 IoT while they continue to invest in updating their applications. Today at the Microsoft Build conference, we are sharing preliminary information on this CE migration technology and asking customers to give us feedback by registering at the link below.

How simplified CE migration works
The CE migration technology used on Windows 10 employs pico process technology to run applications in an isolated environment, with the application’s OS dependencies decoupled from the underlying host OS. Pico process technology is used in Windows Subsystem for Linux, which allows Linux distributions to run on Windows, and SQLPAL, which allows SQL Server to run on a Linux host.

The entire Windows CE environment, both user mode and kernel mode, is lifted into the pico process, which runs in the user mode of the underlying Windows 10 IoT OS. A Windows 10 platform abstraction layer handles syscalls (e.g., virtual memory allocations) from the pico process and delivers them to the Windows 10 host OS for processing. You can learn more about how this technology works in the recent IoT Show episode “Modernizing Windows CE Devices.”

Work with Microsoft to simplify CE migration
Over the coming months, we want to gather input from developers to help us better understand the requirements for this CE migration technology and determine how to bring it to our customers.

If you have a Windows CE solution that you want to move forward to Windows 10 IoT, please register your interest here and attend the “Windows IoT: The Foundation for Your Intelligent Edge” session at Build on May 8 at 2 p.m. PT.

The post Modernizing Windows CE systems with Windows 10 IoT appeared first on Windows Developer Blog.

What’s new with Azure Pipelines


Azure Pipelines, part of the Azure DevOps suite, is our Continuous Integration and Continuous Delivery (CI and CD) platform, used every day by large enterprises, individual developers, and open source projects. Today, we’re thrilled to announce new features for Azure Pipelines, including some much-requested ones:

  • Multi-stage YAML pipelines (for CI and CD)
  • Environments and deployment strategies
  • Kubernetes support

Multi-stage YAML pipelines

One of our biggest customer requests since launching YAML support for Build pipelines (CI) has been to support it for Release pipelines (CD) as well. To accomplish this, we now offer a unified YAML experience, so you can configure each of your pipelines to do CI, CD, or CI and CD together. Defining your pipelines in YAML documents allows you to check your CI/CD configuration into source control together with your application’s code, for easy management, versioning, and control.

Multi-stage pipelines view

With our new YAML support, we’re also bringing a new UI to help visualize all of your multi-stage pipelines across the product, whether you’re in the run summary view, looking at all your pipeline runs, or browsing logs.

View all runs

In addition to our new pipelines pages, we have a new log viewing experience as well. It lets you easily jump between stages and jobs, and helps you quickly identify errors and warnings.

New logs view

This feature will be rolled out for all accounts over the next few days. To enable it, go to the preview features page and turn on the toggle for “Multi-stage pipelines”.

Getting going with YAML

We want you to be able to get going fast wherever your code lives. Once you connect your repo, whether it’s on GitHub, Azure Repos, or your own external Git source, we’ll analyze your code and recommend a YAML template that makes sense for you and gets you up and running quickly.

Configure pipeline from template

While we want to get you running quickly, we know you’re going to want to keep configuring and updating your YAML. To help make it even easier to edit and update your pipeline, we’ve created an in-product editor with IntelliSense smart code completion, and an easy task assistant.

YAML editor with IntelliSense

Building your first multi-stage pipeline with environments

Bringing CD to YAML means a bunch of great additions in terms of commands and functionality. Let’s cover the basics with a simple pipeline that just builds and deploys an app in two stages.

stages:
- stage: Build
  jobs:
  - job: Build
    pool:
      vmImage: 'Ubuntu-16.04'
    continueOnError: true
    steps:
    - script: echo my first build job
- stage: Deploy
  jobs:
    # track deployments on the environment
  - deployment: DeployWeb
    pool:
      vmImage: 'Ubuntu-16.04'
    # creates an environment if it doesn’t exist
    environment: 'smarthotel-dev'
    strategy:
      # default deployment strategy
      runOnce:
        deploy:
          steps:
          - script: echo my first deployment

If we ran this pipeline, it would execute a first stage, Build, followed by a second stage, Deploy. You are free to create as many stages as you wish, for example to deploy to staging and pre-production environments.

You may notice two new interesting concepts in here if you’re familiar with our YAML schema. And if this is the first time you’re seeing our YAML, you can read up on the core concepts here.

The first new keyword is environment. Environments represent the group of resources targeted by a pipeline, for example, Kubernetes clusters, Azure Web Apps, virtual machines, and databases. Environments are useful for grouping resources, for example under “development”, “staging”, or “production”, and you can define them freely. Defining and using an environment unlocks all kinds of capabilities, for example:

  • Traceability of commits and work items
  • Deployment history down to the individual resource
  • Deeper diagnostics, and (soon) approvals and checks

There’s a lot of great new functionality available today in preview, and even more coming around the corner. You can learn more about environments here.
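As an illustration, here’s a minimal sketch of a deployment job that targets a specific resource within an environment, assuming a Kubernetes namespace resource named bookings has already been added to the smarthotel-dev environment:

- deployment: DeployWeb
  pool:
    vmImage: 'Ubuntu-16.04'
  # target the 'bookings' resource inside the environment
  environment: 'smarthotel-dev.bookings'
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo deploying to the bookings namespace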

Kubernetes environments

The second new keyword is strategy. It defines the deployment strategy, which controls how your application is rolled out across the cluster. The default strategy is runOnce, but in the future you’ll be able to easily indicate other strategies, such as canary or blue-green.
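To make that concrete, here’s a sketch of what a canary rollout might look like, using purely hypothetical syntax since only runOnce is supported today:

strategy:
  canary:
    # hypothetical: deploy to 10% of targets first, then the rest
    increments: [10]
    deploy:
      steps:
      - script: echo deploy a canary slice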

If you’re ready to start building, check out our documentation for building a multi-stage pipeline with environments. If you want to see some multi-stage pipeline templates to work from, take a look at our templates repo. You can even see those sample pipelines in action inside our samples project.

Kubernetes

If you have an app that has been containerized (i.e., there is a Dockerfile in the repository), we want to make it easy for you to set up a pipeline in less than two minutes to build and deploy to a Kubernetes cluster (including Azure Kubernetes Service). Wrapping your head around Kubernetes can be hard, so we’re making it easy to both get started and continue deploying to your Kubernetes clusters. For more details, read our post on Azure Pipelines and Kubernetes.

Kubernetes is fully integrated with Azure Pipelines environments too. This lets you view all the deployments, daemonsets, etc., running on Kubernetes in each environment, complemented by insights such as readiness and liveness probes of pods. You can use this information and pod-level details (including logs, containers running inside pods, and image metadata) to effectively diagnose and debug any issue, without requiring direct access to the cluster itself.

Kubernetes environments

A look at what’s next

In addition to the preview features that are now available, there are many exciting things just around the corner for Azure Pipelines that we want to share:

  • Caching – We’ll be announcing the availability of another much-requested feature very shortly: caching to help your builds run even faster.
  • Checks and approvals – We’re improving multi-stage pipelines with the ability to set approvals on your environments, to help control what gets deployed when and where. We’ll keep iterating here to deliver more experiences with checks to help gate your multi-stage pipelines.
  • Deployment strategies – We’re adding additional deployment strategies as part of the deployment job type, like blue-green, canary and rolling, to better control how your applications are deployed across distributed systems.
  • Environments – We’re adding support for additional resource types in environments, so you can get going quickly with virtual machines (through deployment groups) and Azure Web Apps.
  • Mobile – With our new UX, we’re going to start to enable new mobile views in Q2 to help you view the status of pipelines, quickly jump into logs, and complete approvals.
  • Pipeline analytics – We’re continuing to grow our pipeline analytics experiences to help you get an all-up picture of the health of your pipelines, so you can know where to go in and dig deeper.
  • Tests and code coverage – We’re going to be shipping all new test and code coverage features and UX in the coming months.

Thank you

Lastly, we want to thank all of our users who have adopted Azure Pipelines. Since our launch last September, we have seen tremendous growth, and we are particularly excited about the response from the open source developer community. It’s been an honor to see Azure Pipelines badges on so many open source projects we love and use regularly ourselves. In the first eight months, public repositories have already used over 88 years of free build time on Azure Pipelines. Check out Martin’s post for some more stats and stories from open source projects. We’ve also received so much great feedback from project maintainers to date and we can’t thank the community enough.

If you’re new to Azure Pipelines, get started for free on our website and learn more about what you can do with Azure Pipelines.

We’ll be posting more deep-dive information in the coming weeks, but as always we’d love to hear your feedback and comments. Feel free to comment on this post or tweet at @AzureDevOps with your thoughts.

The post What’s new with Azure Pipelines appeared first on Azure DevOps Blog.

Introducing Azure Boards to the GitHub Marketplace


With the adoption of Agile and DevOps practices into your team comes a wealth of autonomy and flexibility to develop the features that matter for your customers and own them through the development cycle, into production, and back again. However, when it comes to staying aligned and coordinated with the other teams that deliver your product or even with other products within your organization, the wonderful social collaboration within your team’s repositories starts to run into scale challenges. This is where Azure Boards can help, especially given that it’s now available from within the GitHub Marketplace.

Stay aligned and coordinated with Azure Boards

Beyond helping you to track work for your team with Kanban boards, backlogs, dashboards, and reporting, Azure Boards also helps you tie your work to the higher-level priorities of your organization and stay coordinated by tracking dependencies between your work and that of other teams. Plus, when it comes to staying aligned and being able to ensure traceability with the development activity, Azure Boards has strong integration with GitHub to link commits, pull requests, and now issues.

Link code activity and issues from GitHub

Linking the commits and pull requests to your work items in Azure Boards ensures that context is always just a click away. Whether you’re browsing through the active work for the team within a board in Azure Boards or reviewing a pull request, you can quickly see where development stands and what other information is available for a given change. You can establish links either manually within the work item using the appropriate GitHub URL, or by using the AB#[Work Item ID] mention syntax within your GitHub commit message, pull request title, or description.
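For example, a commit message such as “Fix booking page crash AB#125” (an illustrative work item ID) would automatically link that commit to work item 125 once the connection is configured.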

Work item mention in GitHub issue description

With the ability to also link to issues in GitHub, which is rolling out as part of the Sprint 151 Update, several other scenarios are now possible. Your team can continue accepting bug reports from users as GitHub issues, for example, but still organize the team’s overall and related work in Azure Boards. The same mention syntax applies and can be used within the issue title or description.

GitHub links in an Azure Boards work item

Install the Azure Boards app from the GitHub Marketplace

While you’ve been able to get started with Azure Boards from azure.com/boards for several months now, the new app in the GitHub Marketplace streamlines the acquisition of the service and the configuration of your GitHub repository connections. The OAuth and personal access token authentication options will continue to be available for this integration as alternatives; however, those use the GitHub identity of the administrator who set up the connection to monitor and link activity. With the app, you’re protected from personnel changes and avoid the confusion of seeing your administrator’s identity editing things like the pull request description to convert a mention to a hyperlink.

Azure Boards listing in GitHub Marketplace

Licensing for Azure Boards remains the same for the app, including the great offer to start with your first few users for free and even more for public projects. You can also learn more about how to configure and use the integration by visiting our GitHub & Azure Boards documentation.

I’m excited for you to try Azure Boards and its integration with GitHub to help you adopt DevOps practices at scale. You can share your thoughts directly with the product team using @AzureDevOps, Developer Community, or comment on this post.

The post Introducing Azure Boards to the GitHub Marketplace appeared first on Azure DevOps Blog.

New code analysis quick fixes for uninitialized memory (C6001) and use before init (C26494) warnings


In the latest Preview release of Visual Studio 2019 version 16.1, we’ve added two quick fixes to the Code Analysis experience focused around uninitialized variable checks. These quick fixes are available via the Quick Actions (lightbulb) menu on relevant lines, accessed by hovering over the line or squiggle, or by pressing Ctrl+Period.

The first release of Visual Studio 2019 brought in-editor code analysis and various C++ productivity improvements, including a quick fix for the NULL to nullptr rule and others. In implementing further code analysis quick fixes, we are basing decisions on the following criteria: 1) the warning should have a low false positive rate; 2) the warning should be high-impact and have a potentially significant downside if not corrected; 3) the warning should have a relatively simple fix. Looking at the most feasible warnings, Preview 3 provides quick fixes for the following:

C6001: using uninitialized memory <variable>

Visual Studio reports warning C6001 when an uninitialized local variable is used before being assigned a value, which can lead to unpredictable results. This warning may be fixed by adding empty curly braces so that the variable/object is value-initialized (will be all zeros).
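For example, if a declaration such as int index; triggers C6001 on a later use, the quick fix rewrites it as int index{};, value-initializing the variable to zero.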

New Code Analysis quick fixes: C6001

This warning and corresponding quick fix are enabled by default in the Microsoft Native Minimum ruleset.

C26494: VAR_USE_BEFORE_INIT

This warning goes hand-in-hand with the previous one and is fixed in the same way. However, while warning C6001 is generated where the uninitialized variable is used, warning C26494 shows up where the variable is declared.

New Code Analysis quick fixes: C26494

Note that this warning and corresponding quick fix are not enabled in the default ruleset, but rather under the C++ Core Check Type Rules. To change rulesets in an MSBuild project, navigate to Property Pages > Code Analysis > General; for projects using CMake, add the "codeAnalysisRuleset" key into your CMakeSettings.json with the value set to the full path or the filename of a ruleset file.
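As a sketch, the corresponding CMakeSettings.json entry might look like the following (the configuration name, generator, and ruleset filename here are illustrative):

{
  "configurations": [
    {
      "name": "x64-Debug",
      "generator": "Ninja",
      "codeAnalysisRuleset": "CppCoreCheckTypeRules.ruleset"
    }
  ]
}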

Send us feedback 

Thank you to everyone who helps make Visual Studio a better experience for all. Your feedback is critical in ensuring we can deliver the best Code Analysis experience. We’d love for you to download Visual Studio 2019 16.1 Preview 3, give it a try, and let us know how it’s working for you in the comments below or via email. If you encounter problems or have suggestions, please Report A Problem, or let us know via Visual Studio Developer Community. You can also find us on Twitter @VisualC.

The post New code analysis quick fixes for uninitialized memory (C6001) and use before init (C26494) warnings appeared first on C++ Team Blog.

Key improvements to the Azure portal user experience


We’re constantly working on user experience improvements in the Azure portal. Our goal is to offer you a productive and easy-to-use platform so you can build, manage, and monitor your service from a single pane of glass.

We’d like to share a few of the exciting updates that improve the user experience:

  • Improvements to global search, with faster load times and smarter results.
  • Faster, more intuitive resource browsing with a variety of display and filtering enhancements.
  • Powerful querying capabilities across your resources via Azure Resource Graph.
  • Sign in to the Azure portal using your GitHub account.
  • An enhanced customer experience with improvements to Azure Quickstart Center.
  • Streamlined service creation and more consistent user experiences.
  • Full-screen creation within Azure Application Gateway.
  • Detailed change tracking via Activity Log.

Let’s take a deeper look at some of these improvements.

Improvements to global search

Many of you use global search (at the top of the screen) to find Azure services, resources, resource groups, documentation, or marketplace offerings. Almost 50 percent of searchers use this functionality to find Azure services, and more than 35 percent use it to find resources (instances of services).

Global search has been improved to provide a faster and richer experience. We’ve optimized the performance of both the services and resources sections, and the UI now displays data as it becomes available, so results show up sooner. You’ll find more relevant results when searching for services, and misspelling a service name (“Vrtual Machnes” instead of “Virtual Machines,” for example) is likely to return the result you wanted.

Additionally, the layout has been improved for better readability, and now displays more search results per section.

Improvements in global search results in the Azure portal

Improvements in global search results

When focused, the search box displays the last five recent search terms along with the last five recently used resources.

Global search history in the Azure portal

Global search history

Give the new global search a try. Looking for keyboard shortcuts? Activate global search by pressing G + / anytime during a session.

Navigate through improved resource browsing

Browsing and navigating resources is probably one of your most common use paths. We’ve been improving this experience to deliver faster access to the resources and services that you care about. To begin with, we’ve recently enhanced the categorization in the “All services” list, making it easier to find the service that you’re looking for.

All services view in the Azure portal

All services view

When browsing, each resource list displays resources across all your subscriptions, resource groups, and locations. There are resource-specific lists (specialized in one resource type) and an “All resources” list that shows each of your resources in a single view.

We’ve optimized the “All resources” view for better performance and functionality. Some of the key improvements include:

  • Azure Resource Graph, which delivers significantly improved performance, especially when working with large sets of resources across multiple subscriptions.
  • Pill filters for a cleaner experience and the ability to create complex predicates, including filtering by tags.
  • The ability to export lists of resources to a CSV file.

All resources view in the Azure portal

All resources view

These capabilities are also available in the Resource groups view.

We’re working to bring these performance and experience improvements to the specialized lists of resources, such as virtual machines (VMs), storage, app services, and more. You’ll be able to preview some of those services very soon.

Build rich dashboards with Azure Resource Graph (preview)

Azure Resource Graph, available in preview, enables full visibility into your environments by providing high performance and powerful querying capabilities across all your resources. In the Azure portal experience, you can now utilize the power of Azure Resource Graph to query all the resources across your Azure subscriptions, locations, management groups, and more. You can experience this by typing “resource graph” into global search.

Finding Resource Graph Explorer (preview)

Finding Resource Graph Explorer (preview)

This launches Azure Resource Graph Explorer, a tool that allows you to write queries using the Azure Resource Graph Query Language (based on KQL, the same language used in Azure Data Explorer).

Azure Resource Graph Explorer (preview) interface

Azure Resource Graph Explorer (preview) interface

The query editor layout is similar to existing query editors, like Azure SQL Database. There’s a schema explorer on the left, a tabbed section for queries at the top, and a results panel at the bottom. This tool allows you to perform queries over your resources and display the results as tables or charts, depending on the shape of the data.
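For example, a simple query such as summarize count() by type returns a count of your resources grouped by resource type, a natural fit for the chart view.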

Writing queries in Azure Resource Graph Explorer (preview)

Writing queries

Query results can then be pinned to Azure portal dashboards by using one of three types of tiles: single value, table, or chart (bar or donut). These dashboards can be shared, secured (via RBAC), and managed like any other Azure resource.

Demo a sample inventory dashboard: in the dashboard, use the "Upload" option to import the sample JSON.

Azure Resource Inventory dashboard example

Dashboard example

Sign in to the Azure portal using your GitHub account

It just got easier for GitHub developers to explore Azure. Developers can now sign in to Azure using their GitHub account and configure their repositories for deployment in Azure.

Note that to start exploring Azure, you’ll still need an Azure subscription. If you don’t have one, get started by creating an Azure free account.

For additional details, check out the documentation and the DevOps blog.

Take your first steps with Azure Quickstart Center

The Azure Quickstart Center is a new customer experience intended to help you take your first steps in Azure with confidence. We launched this as a preview at Microsoft Build 2018 and are now proud to announce that it has reached general availability. We’ve also updated the design, based on your feedback, for improved discoverability and better navigation.

Azure Quickstart Center

Azure Quickstart Center

Azure Quickstart Center helps you set up, secure, and manage your Azure environment the right way. You can locate frequently used Azure offerings and create service instances to support the scenarios you’re interested in. The Quickstart Center also includes integration with Microsoft Learn (via the “Take an online course” tab), offering a curated subset of some of the most popular learning paths and modules to get you started with Azure.

Coming in June, the Azure migration guide will be available in the Azure Quickstart Center Setup section.

Azure Quickstart Center can be found in All services, or by using global search.

Create Azure services using a consistent experience

Our goal with these features is simple—we want you to be able to learn a few patterns and apply them everywhere.

At Microsoft Ignite 2018, we introduced an improved and streamlined process for creating instances of virtual machines and storage. This new experience resulted in a 20 percent time-in-task reduction and a 15 percent increase in completion rates. We also stated our goal of making all resource creation experiences in Azure consistent with that experience. You can learn more in the article, “Creation at Cloud Scale.”

Over the past few months, we’ve made great progress on upgrading more services to meet that goal. We’re now delivering a consistently streamlined experience in dozens of services across the Azure portal.

More services offer a consistent creation experience

More services offer a consistent creation experience

We’re not stopping here—in the upcoming months, you’ll see the remaining instance creation pages start to use this new pattern.

Create application gateways with a refreshed experience

You can now preview the new full-screen creation experience for Azure Application Gateway. This experience enables you to view a summary of the application gateway components before you create it, and allows you to configure the end-to-end components of your application gateway before deploying it.

You can set up frontends, backend pools, and routing rules, as well as configure listeners and HTTP settings. We’ve also added tabs to make navigation easier, and in-context guidance to help you choose between configuration options.

Application Gateway creation experience

Application Gateway creation experience

View change tracking in Activity Log

The Activity Log shows you what changes happened to a resource during an event. Now you can view this information with Change history in preview. To do so, navigate to Activity Log using the menu on the left side of the portal, choose an event you want to look deeper into, and select the Change history (preview) tab to view any associated changes with the event.

Change history (preview) tab

Change history (preview)

If there are any changes associated with the event, you'll see them as a selectable list. Clicking one opens the Change history (preview) page, which displays details about the change. As you can see in the following example, we’re able to see not only that the VM changed sizes, but also what the previous VM size was, and what it was changed to.

Changes to resources, before and after

Changes to resources, before and after

For more information on resource changes, check out the documentation, “Get resource changes.”

Next steps

We develop these research-driven, innovative experiences based on your feedback. Our goal is to help you be more productive as you build, manage, and monitor your resources through the Azure portal. You can leave feedback and learn more through the following resources:

  • Let us know your thoughts in the comments section below or on Twitter.
  • Sign in to the Azure portal today to see the new features.

Generally available: Azure Red Hat OpenShift


At Red Hat Summit 2018, I had the pleasure of working with Scott Guthrie to demonstrate the new managed OpenShift on Azure that we were building into the Azure platform in partnership with Red Hat. Here at Red Hat Summit 2019, I’m thrilled to announce that the fruits of this collaboration have reached general availability. This means that enterprises can use OpenShift for their most critical production workloads and know that both Red Hat and Microsoft are standing behind the service to ensure your success.

The strength of this partnership has been built on the foundation of our work to develop joint support for Red Hat Enterprise Linux. Our learning and growth from that work has demonstrated the unique customer value that we can deliver to enterprises when we work together. It’s a clear demonstration of the power of open source to enable our customers to achieve more. When you are running on Azure Red Hat OpenShift, whether you need support for the OpenShift components on the cluster or the underlying infrastructure, you can have the confidence that you are working with a single, integrated support team composed of Red Hat and Microsoft experts collaborating to understand and accelerate a successful digital transformation of your business. Likewise, you can operate with the confidence that—from the hardware to the software—the service is operating in a robust and compliant manner that ensures the security and privacy of both your corporate and user data.

Azure Red Hat OpenShift: A refresh

If you missed the announcement at Summit last year, Azure Red Hat OpenShift is a fully managed implementation of Red Hat OpenShift deeply integrated into the Azure control plane. This means that you can create clusters in the Azure portal, via the Azure command-line tools, or even from your own custom code via programming language SDKs. This also makes integrating cluster creation, scaling, upgrading, and deleting into your existing tools and CI/CD pipelines seamless and easy.
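For instance, at the time of writing a cluster can be created with a single CLI call along the lines of az openshift create --resource-group myGroup --name myCluster (the group and cluster names here are placeholders; check the CLI reference for the exact parameters available in your version).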

Azure Red Hat OpenShift is a one-of-a-kind solution in the public cloud, offering the best of OpenShift with pro-active 24/7 management and support from both Microsoft and Red Hat. With the general availability of the service, it’s a great time to see how the combination of Microsoft and Red Hat working together can make enterprise software development more agile and reliable, while still living within the confines of enterprise software requirements.

Easy management and integration

Because this is a fully managed service, there are no VMs for you to manage. Patching, upgrading, repair, and disaster recovery are all handled for you as part of the service. This management leaves your application DevOps teams free to focus on operating your applications and not the underlying infrastructure. Likewise, core security technologies like Azure Active Directory are automatically integrated into OpenShift’s Kubernetes-based control plane so that all the enterprise policies around two-factor access, geo-location, and more automatically apply to people deploying and managing software in your cluster.

Finally, because we understand that cloud native is a journey, Azure Red Hat OpenShift easily integrates with existing virtual networks so that you can achieve connectivity to VM-based infrastructure that hasn’t moved to containers, Azure services that support Virtual Network Service Endpoints, or even via ExpressRoute and VPN to databases and services that are running on-premises.

What’s next

But, just like cloud native and Kubernetes, Azure Red Hat OpenShift is a journey, not a destination. As we look past the general availability of the service, our teams are actively working to add support for more capabilities like private clusters, bring your own key (BYOK) for encryption at rest, certificate rotation, Windows Server container integration, and more. Please provide feedback on UserVoice.

If all of this seems awesome (and it is!) please:

Reshaping the business landscape with serverless APIs


Things are changing for the modern business. API-first development and microservices architecture is opening the door to new innovations. Many of these new approaches are possible in part due to the evolution of serverless technology, which eliminates the need for the management of infrastructure.

Fully managed infrastructure allows for allocating resources to solving a business problem, rather than managing the IT infrastructure. This results in more agility, reduced operating cost, and shorter time-to-market, which is important for organizations of any size.

Serverless is for all, no matter the size

The benefits serverless offers are independent of the size of the company. For example:

Startups need to quickly assess product-market fit and build prototypes to test their hypotheses.

  • With limited resources, startups can build, measure, and iterate their way to success with execution-based pricing models.
  • Unlocks a new generation of startups, all built on the idea that a small group of people with a limited budget can be disruptive.
  • As they evolve, they’ll benefit from serverless much in the same way as larger organizations do.

Enterprises need to adapt to constantly evolving customer requirements to stay competitive with agile, fast moving startups.

  • Serverless enables a business to grow without worrying about managing infrastructure and the planning associated with it.
  • Promotes a move to architectural patterns that increase the flexibility and agility of software development.
  • Provides the ability to compete at the same level as more nimble players, while consistently growing the business.

Both benefit equally from a serverless approach, for different reasons.

Improved, stronger integration for API-first applications

Over the past year, API Management has collaborated with Azure Functions to build a stronger integration between the two services. Our goal is to increase developers’ productivity and provide better, more impactful experiences for creating serverless, API-first applications.

To achieve that goal, we are announcing that two new capabilities are now generally available:

Azure API Management simplifies publishing of APIs as well as their consumption by clients. It allows for abstraction of APIs from their implementation. APIs are governed through policies, managed from a unified plane, optimized through caching, and published for frictionless consumption through a developer portal.

API Management is the front door for your application. Azure Functions provide serverless compute and eliminate the initial friction associated with implementing new applications. Functions allow for agile assembly of prototypes and production-grade solutions.

Moving into the future

The proliferation of APIs and the API economy has given rise to new opportunities for businesses of all sizes. API-first development is now a necessary approach to ensure future success. This is why we are excited about the investments we've made this year and how we are making API architectures easier to adopt by leveraging serverless technology.

Jump right in and get started:

Interested in talking with an expert? Schedule a call with one of our solution experts for a more personalized approach to starting with serverless, API-first applications.

SAP and Microsoft bring IoT data to the core of the business applications


As a leader in the IoT cloud ecosystem, Microsoft enables a full stack of business applications, within different industries, across the intelligent edge and intelligent cloud. The continued growth of the IoT industry is going to be a transformative force across all organizations. Microsoft and SAP have collaborated for over two decades to enable enterprise SAP solution deployments, and the partnership has expanded across the Industrial Internet Consortium, the OPC Foundation, and the Platform Industrie 4.0.

At Mobile World Congress in February, SAP and Microsoft announced the extension of our collaboration to physical assets in the Internet of Things (IoT) space. Today, we are excited to announce the general availability of SAP Leonardo IoT integration with Azure IoT Hub.

SAP Leonardo IoT integrates with Azure IoT services, providing customers with the ability to contextualize and enrich their IoT data with SAP business data to drive new business outcomes. Leveraging Azure IoT Hub and Azure IoT Edge, it provides secure connectivity at scale, powerful device management functionality, and industry-leading edge computing support. With the ability to intelligently combine business data with industrial IoT capabilities and services consumed by SAP business applications, customers now have a complete view of their data, from physical assets to business processes to customer relationships, enabling a full digital feedback loop.

Microsoft intelligent edge diagram

SAP and Microsoft’s common goal is to provide a 360-degree view of the data, from physical assets to business processes, enabling customers to remove data silos and realize a full digital feedback loop across the intelligent cloud and intelligent edge. By running SAP Essential Business Functions on Microsoft Azure IoT Edge, customers will be able to extend their S/4HANA and C/4HANA business processes closer to their most valuable assets, enabling business requests and governance at the edge.

Explore more about Azure IoT and SAP Leonardo IoT interoperability.

Connecting the colossal: How to scale innovation with serverless integration


Starting the process of migrating to the cloud can be daunting. Legacy systems that are colossal in scale often overwhelm the average team tasked with the mission of digital transformation. How can they possibly untangle years of legacy code to start this new digital transformation initiative? Not only are these systems colossal in scale, but also colossal in terms of business importance. Enterprise applications like SAP and IBM are integral to the daily rhythm of business. A seemingly simple mistake can result in catastrophic consequences.

Over the past year, Azure Integration Services has been reflecting on solutions to help with these challenges, and we’re excited to announce new capabilities:

The challenges facing customers

Over the past year, we've had the opportunity to meet with and hear from customers in person to discuss the biggest challenges facing their organizations in terms of innovation. While the tools and technology customers use might be unique to their industry, the high-level challenges encountered are often universal.

Here are a couple of the common high-level challenges:

  • Developing, onboarding, and scaling new apps and services within existing IT infrastructure. Vipps, the number one payment service in Norway, faced these challenges while making the move from a monolithic application structure to a microservices-first architecture.
  • Migrating from on-premises legacy systems to the cloud without disrupting day-to-day operations. Alaska Airlines worked through this by adopting a hybrid approach.
  • Rolling out digital transformation efforts throughout your organization and ensuring the success of these initiatives. Finastra tackled this problem by leaning into an API-first solution and created a partner program that unlocked many different new opportunities for them.

These challenges are rooted in integration. By moving to the cloud and creating new, smaller cloud-native services, customers are realizing that the true benefit lies in how everything works together. Integration is no longer about transforming and moving data from point A to point B; it’s now about how systems of apps, microservices, databases, and on-premises infrastructure are composed to achieve results.

Azure Integration Services leverages serverless technology to remove the resource overhead of managing infrastructure and instead focus on connecting and composing systems that are adaptable to changing demands.

Looking ahead, our focus is on two key areas:

  • Making the journey to the cloud as smooth and seamless as possible by improving the experience with our products, and how our products work together.
  • Creating an out-of-the-box solution library that provides step-by-step guidance on when and how to use our Azure services to solve business and IT problems around connectivity, integration, and application development.

We know that to succeed in these areas, we must be at the forefront of the changing technology landscape. Integration is the backbone that drives application innovation and development, the journey to the cloud, and long-term success with cloud-native innovation strategies.

To learn more about Azure Integration Services, watch the Azure Friday episode giving an overview of Integration Services.

Interested in talking with an expert? Schedule a call with one of our solutions experts to see how Azure Integration Services can help you.

Introducing health integrated rollouts to Azure Deployment Manager
