
Re-imagining developer productivity with AI-assisted tools


TL;DR: Harnessing the wisdom of the community, Visual Studio IntelliCode is revolutionizing developer productivity. We started with AI-assisted IntelliSense and are now expanding the application of artificial intelligence to significantly accelerate learning, radically improve development agility, and increase code quality by means of two exciting new capabilities: whole line completions and refactoring.

Technology is evolving so fast that every developer is constantly learning, whether you’re adopting a new programming language, API, or architecture (e.g. microservices). Amidst this rate of technological change, existing tools are no longer sufficient for achieving agility as development teams try to accelerate their time-to-market and increase code quality. As a result, development tools need to evolve radically to satisfy the productivity demands of modern teams.

At Microsoft Ignite, we showed a vision of how AI can be applied to developer tools. After talking with thousands of developers over the last couple of years, we found that the most highly effective assistance can only come from one source: the collective knowledge of the open-source GitHub community. This is exactly what IntelliCode provides.

AI-assisted suggestions + whole-line code completions

IntelliCode now provides whole-line code completion suggestions mined from the collective intelligence of your trusted developer knowledge bases. This is like having an AI developer pair-programming with you, offering meaningful suggestions and whole-line code completions without disrupting your flow. To generate accurate suggestions and provide completion assistance as you code, IntelliCode extends the GPT-2 transformer language model so that our machine-learning models learn about programming languages and coding patterns.

The GPT model architecture, originally developed by OpenAI, has demonstrated strong natural language understanding, including the ability to generate conditional synthetic text examples without needing domain-specific training datasets. For our initial language-specific base models, we adopted an unsupervised learning approach that learns from over 3,000 top GitHub repositories. Our base model extracts statistical coding patterns and learns the intricacies of programming languages from GitHub repos to assist developers in their coding. As you type, IntelliCode uses the semantic information from your code context and those sourced patterns to predict the most likely completion, inline with your code.

AI-assisted whole-line completions in the Visual Studio Code editor

IntelliCode has now extended our machine-learning model training capabilities beyond the initial base model to enable your teams to train their own team completions. Team completions are useful if your development team uses internal utility and base class libraries or domain-specific libraries that aren’t commonly used in open-source GitHub repositories. If you’re using code that isn’t in that set of GitHub repos, those recommendations aren’t as useful to you. By training on your team’s code, IntelliCode learns patterns from your code to make more accurate suggestions. This enables your team to accelerate learning and take advantage of the knowledge of both your team and the broader community.

AI-assisted Refactoring

IntelliCode watches code changes as they occur in the IDE and locally synthesizes, on demand, edit scripts from any set of repetitive pattern changes. It uses these edit scripts to produce suggestions, enabling you to apply repetitive changes quickly or create a pull request with the suggestion(s) for team review, without interrupting your current work. IntelliCode refactorings take the tedium and error-proneness out of routine tasks, such as introducing a new helper function. To do so, IntelliCode uses an AI technology called program synthesis, and more specifically, programming-by-examples (PBE).

IntelliCode refactorings based on repetitive code edits in the Visual Studio IDE. The quick-action menu offers three choices: take the repeated edit, ignore the suggestion, or submit a new PR with the suggestion.

PBE technology has been developed at Microsoft by the PROSE team and has been applied to various products to enable users to streamline repetitive tasks after providing a few examples. You’ve seen PBE in action in Flash Fill in Excel and webpage table extraction in Power BI. IntelliCode advances the state of the art in PBE by allowing patterns to be learned from noisy traces, as opposed to explicitly provided examples, without any additional steps on your part. You can read more about this in this earlier post. We’re planning to broaden this capability to more languages, as well as enable your team to easily benefit from the patterns that you find.

 

IntelliCode refactoring suggestions in the Visual Studio IDE

 

What about privacy and code security?

We know that your code is a vital business asset, so we are committed to a simple principle across our developer tools and services: your code remains your code, and your models remain your models unless you choose to share them with others. You control when to use AI assistance and who has access to your data.

As examples of this principle in action, when we train a team model for IntelliCode completions on your codebase, we don’t share that model with anyone but you unless you choose to share it, and we locally extract only those elements of the code that are needed to create a model for recommending completion values. We also make it easy for you to ensure that access to your model follows the same security access rules as your code repo when shared, with no extra configuration. You can read more about this in the documentation.

Our PROSE-based models work entirely locally, so your code never leaves your machine.

When it is necessary for us to use a service-based model to deliver a feature, we ensure all appropriate security is in place to secure any information (including code) that is transmitted over the network – the documentation has more information.

What’s next? 

Our ambition is to contribute to developer assistance across the whole developer lifecycle. We’re particularly interested in making it much easier to learn and retrieve typical code snippets while learning a new API or re-learning an old one. We continue to listen closely to our developer community about where we can best contribute to assist with your daily development challenges.

Get the latest Visual Studio 2019 previews; there, you can try much of this out for C# already. Watch the Visual Studio blog for more announcements as we make progress.

You can also sign up for regular updates and invitations to future private previews.

Try it now

You can try out IntelliCode team models and refactorings in Visual Studio for C# by downloading our latest Visual Studio Preview.

We also support a variety of languages in Visual Studio and Visual Studio Code:

  • Visual Studio: C#, C++, JS/TS, XAML
  • Visual Studio Code: Python, JS/TS, Java, SQL

 

Happy coding!

The post Re-imagining developer productivity with AI-assisted tools appeared first on Visual Studio Blog.


Visual Studio Code C++ extension: Nov 2019 update


The November 2019 update of the Visual Studio Code C++ extension is now available. This latest release comes with a big list of improvements: Find All References, Rename Symbol refactoring, support for localization, new navigation breadcrumb controls, and improvements to the Outline view to name a few. For a more detailed list of changes, check out our release notes on GitHub.

Find All References

Now you can right-click on a C++ symbol anywhere in your file and select Find All References from the context menu. This will search for all instances of the symbol inside the current workspace. Depending on the symbol selected, the search will either show a progress bar while it looks for all instances of that symbol or directly display the results in the References Results view.

C++ editor with context menu showing "Find all references" menu item selected

Progress dialog showing "Find all References" operation with 30 files out of 37 completed

The References Results window displays its results in two panes. The top pane shows the confirmed results: those instances where IntelliSense confirmed that the text match is also a semantic match for the symbol you searched for. The bottom pane shows all other text matches, categorized by where they were found, e.g. in a string, in a comment, or inside an inactive macro block.

Find all references results showing in the References Window

You can clear individual results from the list or all the results by using the controls in the References Results window. If you clear all the results, you can also review the list of previous searches and you have the option to rerun them.

References results windows showing history of searches

Rename Symbol refactoring

The Rename Symbol operation is unquestionably the refactoring tool most requested by C++ developers. With the November 2019 release, we’re happy to announce that this functionality is now supported in the C++ extension. Whether you invoke Rename directly via its keyboard shortcut F2 or select it from the context menu, you will be prompted with a textbox that allows you to enter the new symbol name.

C++ editor with Rename operation in progress

If all the references to the symbol can be confirmed, the rename operation is performed immediately after you confirm the new name for your symbol. Otherwise, a list of candidates is displayed in the C++ Rename Results window. Before committing the refactor operation, you have the option to include additional candidates for rename that were found as text matches (not semantic matches) during the search, e.g. in strings, comments, or inactive blocks.

Pending Rename window showing confirmed changes and additional candidates for rename

To confirm a rename operation, click on the “Commit Rename” action on the “Pending rename” title bar.

Localization support

With this version, the C++ extension UI, command names, tooltips, warnings, and errors are all localized and will respect the display language you selected via the “Configure Display Language” command.

Visual Studio Code with UI elements in Japanese

Navigation breadcrumbs in C++ editors and Outline view improvements

The C++ editor now includes in its navigation breadcrumbs the symbol path up to the cursor position in addition to the file path. To quickly navigate to the Breadcrumbs UI, you can run the “Focus Breadcrumbs” command (default keyboard shortcut for this command is Ctrl+Shift+. or Command+Shift+.). To switch between the different elements in the UI, use the Left and Right keyboard shortcuts (which default to Ctrl+Left Arrow/Right Arrow or Option+Left Arrow/Right Arrow).

C++ editor with breadcrumb dropdown expanded

You can also customize the appearance of breadcrumbs. If you have very long paths or are only interested in either file paths or symbol paths, you can configure the breadcrumbs.filePath and breadcrumbs.symbolPath settings (both support on, off, and last). By default, breadcrumbs show icons, but you can remove them by setting breadcrumbs.icons to false.
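For example, a minimal settings.json sketch using the settings named above (the specific values here are just illustrative choices, not recommendations):

{
    // Show only the last segment of the file path in the breadcrumb bar
    "breadcrumbs.filePath": "last",

    // Keep the full symbol path up to the cursor position
    "breadcrumbs.symbolPath": "on",

    // Hide the icons next to breadcrumb entries
    "breadcrumbs.icons": false
}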

Also new in this release is the ability for the Outline view (as well as the new Breadcrumb section) to list C++ symbols as a hierarchy rather than a flat list.

Outline window showing hierarchy of C++ types

What do you think?

Download the C++ extension for Visual Studio Code today, give it a try, and let us know what you think. If you run into any issues, or have any suggestions, please report them in the Issues section of our GitHub repository. You can also join our Insiders program and get access to early builds of our release by going to File > Preferences > Settings (Ctrl+,) and, under Extensions > C/C++, changing “C_Cpp: Update Channel” to “Insiders”.
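If you prefer editing settings.json directly, the equivalent entry (assuming the setting ID below is the one backing the “C_Cpp: Update Channel” UI option) looks like this:

{
    "C_Cpp.updateChannel": "Insiders"
}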

We can be reached via the comments below or in email at visualcpp@microsoft.com. You can also find our team on Twitter at @VisualC.

 

The post Visual Studio Code C++ extension: Nov 2019 update appeared first on C++ Team Blog.

Azure Machine Learning—ML for all skill levels


Enterprises today are adopting artificial intelligence (AI) at a rapid pace to stay ahead of their competition, deliver innovation, improve customer experiences, and grow revenue. AI and machine learning applications are ushering in a new era of transformation across industries from skill sets to scale, efficiency, operations, and governance.

Microsoft Azure Machine Learning provides enterprise-grade capabilities to accelerate the machine learning lifecycle and empowers developers and data scientists of all skill levels to build, train, deploy, and manage models responsibly and at scale. At Microsoft Ignite, we’re announcing a number of major advances to Azure Machine Learning across the following areas:

  • New studio web experience that boosts machine learning productivity for developers and data scientists of all skill levels, with flexible authoring options from no-code drag-and-drop and automated machine learning, to code-first development.
  • New industry-leading Machine Learning Operations (MLOps) capabilities to manage the machine learning lifecycle, enabling data science and IT teams to deliver innovation faster.
  • New open and interoperable capabilities that provide choice and flexibility with support for R, Azure Synapse Analytics, Azure Open Datasets, ONNX, and other popular frameworks, languages, and tools.
  • New security and governance features including role-based access control (RBAC), Azure Virtual Network (VNet), capacity management, and state-of-the-art responsible AI interpretability and fairness capabilities.

Let’s dive into these announcements in detail to see how Azure Machine Learning is helping individuals, teams, and organizations meet and exceed business goals.

Access machine learning for all skill levels and boost productivity

“By improving forecasting using Azure Machine Learning automated ML, we can reduce waste and ensure pizzas are ready for our customers. This will reduce the guesswork for our operators and allow them to spend more time focusing on other aspects of store operations. Rather than guessing how many pizzas to have ready, store operators are focusing on making sure every customer experience is an excellent one.” - Anita Klopfenstein, CEO, Little Caesars Pizza.

The new studio web experience (currently in preview) enables data scientists and data engineers of all skill levels to complete end-to-end machine learning tasks, including data preparation, model training, deployment, and management in a seamless manner. Choose from three different authoring options based on your skill and preference—no-code drag-and-drop designer, automated machine learning, or a code-first notebooks experience. Access Azure Machine Learning assets (including datasets and models) and rich capabilities (including data drift, monitoring, labeling and more) all from a single location.

 

The new studio web experience provides access to all machine learning lifecycle tasks in a single pane.

Designer (currently in preview) provides drag-and-drop workflows to simplify the process of building, testing, and deploying machine learning models using a visual experience. Customers currently using the classic version of Azure Machine Learning Studio are encouraged to try Designer so they can benefit from the scale and security of Azure Machine Learning.

Automated machine learning user interface (currently in preview) helps data scientists build models without writing a single line of code. Automate the time-intensive tasks of feature engineering, algorithm selection, and hyperparameter sweeping, then operationalize your model with a few clicks of a button.

Notebooks (currently in preview) are a fully managed solution for developers and data scientists to easily get started with machine learning, with pre-configured custom environments that eliminate setup time, while providing management and enterprise readiness capabilities for IT administrators.

New data labeling (currently in preview). High quality labeled data is vital to creating high accuracy models for supervised learning. Teams can now manage data labeling projects seamlessly from within the studio web experience to get labels against data, speeding up the time-intensive process of manual labeling. Labeling tasks supported include object detection, multi-class image classification, and multi-label image classification.

Operationalize at scale with industry-leading MLOps

Azure Machine Learning features built-in MLOps capabilities for enterprise-grade machine learning lifecycle management, enabling data science and IT teams to collaborate and increase the pace of model development and deployment.

“TransLink was able to leverage MLOps in Azure Machine Learning to build and manage models and deploy them in production. This created greater efficiencies and transparency as we moved over 16,000 machine learning models from pilot to production. Ultimately, TransLink customers benefited with improvement between predicted and actual bus departure times of 74%, so they can better plan their journey on TransLink's bus network.” - Sze-Wan Ng, Director Analytics & Development, Translink.

New updates to build reproducible models and achieve machine learning governance and control

Datasets help data scientists and machine learning engineers easily access data from a number of Azure storage services, apply datasets rapidly, reuse them efficiently across tasks, and track data lineage automatically. Rich dataset and model registries help track assets and information to effectively operationalize models and simplify workflows from training to inferencing. Version control helps track and manage assets providing enhanced traceability and supporting the creation of reproducible pipelines for consistent model delivery. Audit trail capabilities ensure asset integrity and provide control logs to help meet regulatory requirements.
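As a rough illustration of working with datasets and the registry, here is a minimal sketch using the Azure Machine Learning Python SDK (azureml-core); the workspace config file, datastore path, and dataset name are placeholders for this example:

from azureml.core import Workspace, Dataset

# Connect to the workspace using a downloaded config.json (assumed to be present)
ws = Workspace.from_config()

# Point at files already uploaded to the workspace's default datastore (path is hypothetical)
datastore = ws.get_default_datastore()
dataset = Dataset.Tabular.from_delimited_files(path=(datastore, 'sales/2019/*.csv'))

# Register the dataset so runs can reference it by name and version, enabling lineage tracking
dataset = dataset.register(workspace=ws, name='sales-2019', create_new_version=True)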

New updates to easily deploy models and efficiently manage the machine learning lifecycle

Batch inference helps increase productivity and decrease cost by generating predictions on terabytes of structured or unstructured data. Controlled roll-out enables the deployment of different model versions under a common scoring endpoint in order to implement a sophisticated deployment pipeline and release models with confidence. Data drift monitoring helps maintain model accuracy by detecting model performance issues from changes to model input data over time. Drift analysis includes magnitude of drift, contribution by feature, and other insights so that appropriate action can be taken, including retraining the model.

 

Data drift monitoring in the studio web experience, showing metrics such as drift magnitude (increasing over time in this example) and the contribution of each feature to the drift.

Innovate using open and interoperable capabilities

With Azure Machine Learning, developers and data scientists can access built-in support for open source tools and frameworks like PyTorch, TensorFlow, and scikit-learn, or the open and interoperable ONNX format. We now support Open Neural Network Exchange (ONNX), the open standard for representing machine learning models. With the new v1.0 release, ONNX Runtime offers stable Python APIs that can be used in Azure Machine Learning on both CPU and GPU.
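For instance, scoring an exported ONNX model with the ONNX Runtime Python API can be sketched as follows (the model file name and input shape are placeholders):

import numpy as np
import onnxruntime as ort

# Load an exported ONNX model (file name is a placeholder)
session = ort.InferenceSession("model.onnx")

# Build a dummy input matching the model's expected shape (shape shown is illustrative)
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Run inference; passing None as the output list returns all model outputs
outputs = session.run(None, {input_name: dummy_input})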

New R-based capabilities enable data scientists to run R jobs on Azure Machine Learning and then manage and deploy R models as web services. Data scientists can choose their preferred development environment, with one-click access to the browser-based integrated development environment (IDE) of RStudio Server (open source edition) or Jupyter with R.

Azure Synapse Analytics is now deeply integrated with Azure Machine Learning to greatly expand the discovery of insights from all your data and apply machine learning models to your intelligent apps.

Azure Open Datasets are now generally available and provide curated datasets, hosted on Azure, and easily accessible from Azure Machine Learning workspaces to accelerate model training. Over 25 datasets are now available, including socio-economic data, satellite imagery, and more. New datasets are continuously being added, and you can nominate additional datasets to Azure.

Build on a secure foundation

“With Azure Machine Learning our data scientist teams can work in an environment supported with industry standard trust and compliance. Enterprise readiness capabilities like RBAC, VNet, and Key Vault ensure that we have granular control over our resources and deliver innovation on a secure platform that enhances productivity so that teams can focus on machine learning tasks rather than infrastructure and setup.” - Cary Goltermann, Manager, Ignition Tax, KPMG LLP.

Security and enterprise readiness updates

Workspace capacity management (currently in preview) helps administrators review compute usage across workspaces and clusters within a subscription for efficient resource distribution. Capacity limits can be set to reallocate resources for capacity management and governance. Role-based access control, or RBAC, (in preview) helps define custom roles for granular access control and supports advanced security scenarios. Virtual network, or VNet, (in preview) provides a security boundary to isolate the compute resources used to train and deploy models, from experimentation through inferencing.

Fairness: In addition to model interpretability in Azure Machine Learning, which supports transparency and model understanding, data scientists and developers can now leverage Fairlearn, the new open source fairness assessment and mitigation tool. This tool assists organizations with uncovering insights about fairness in their model predictions through an intuitive and configurable set of visualizations.

 

Fairness capabilities help uncover insights about model fairness, such as a visualization showing disparity in predictions across subgroups, in this case based on gender.

Start building today

We are excited to bring you these capabilities to help accelerate the machine learning lifecycle, from new productivity experiences that make machine learning accessible to all skill levels, to robust MLOps and enterprise-grade security, built on an open and trusted platform. We are committed to continued investments in machine learning to support your business and applications and help you drive business transformation with AI.


Azure. Invent with purpose.

Azure Arc: Extending Azure management to any infrastructure


If you are like many of our customers, you run a mix of applications in your on-premises datacenters, in the cloud and at the edge. We have been on a journey over the last few years to bring you hybrid innovations to meet you where you are. We have invested in individual connected management services such as Azure Monitor and Azure Backup. We have also delivered a consistent platform through Azure Stack Hub, ensuring that investments made in Azure can be used in disconnected environments.

Many enterprises still face a sprawl of resources spread across multiple datacenters, clouds, and edge locations. Our customers tell us that they are looking for a cloud-native control plane to inventory, organize, and enforce policies for their IT resources wherever they are, from a central place.

At Microsoft Ignite this week, we're taking another major step forward with our hybrid technology. We are announcing Azure Arc, a set of technologies that extends the control plane of Azure out to on-premises, multi-cloud, and edge environments. Azure Arc enables customers to have a central, unified, and self-service approach to manage their Windows and Linux servers, Kubernetes clusters, and Azure data services wherever they are. Azure Arc also extends the adoption of cloud practices like DevOps and Azure security across on-premises, multi-cloud, and edge. In addition to extending the control plane for management, Azure Arc enables customers to run Azure data services anywhere.

Extend Azure management across your environments

Hundreds of millions of Azure resources are organized, governed, and secured daily by customers using Azure Resource Manager. Azure Resource Manager is the control plane in Azure that provides robust deployment, management, and governance capabilities with Azure Cloud Shell, Azure portal, API, role-based access control (RBAC) and Azure Policy for all Azure resources.

A key aspect of Azure Arc is the work we’ve done to extend Azure Resource Manager beyond Azure so that customers have a central and unified approach to manage Windows and Linux Servers, Kubernetes clusters and Azure data services at scale across on-premises, multi-cloud, and edge.

Azure Resource Manager and Azure Arc graphic

Azure Arc extends Azure management across on-premises, multi-cloud, and edge

Using Azure Arc to govern across environments

To illustrate the above scenarios for Azure Arc, let's take a look at a large financial organization that has sprawling server-based IT systems and Kubernetes clusters deployed in datacenters and in private and public clouds. The sprawl makes it difficult to maintain visibility across their environment and harder to manage, govern, and meet compliance requirements.

With Azure Arc, they can manage servers and Kubernetes clusters to get the following benefits:

  • Asset organization and inventory of Windows and Linux Servers, Kubernetes clusters and Azure services with a unified view in the Azure portal and API
  • Universal governance of customer resources through Azure Policy
  • Standardized role-based access control (RBAC) across systems and different types of resources
  • Enable application owners to apply and audit their applications to meet compliance requirements
  • Ability to measure and remediate compliance at scale and down to the individual application, server, or cluster

Adopting cloud practices on-premises

Azure provides cloud DevOps and cloud-native configuration management at scale for all Azure resources. Such cloud practices are optimized for developers that need immediate and programmatic access to resources to create new cloud-native applications. Azure Arc extends these capabilities to any infrastructure across on-premises, multi-cloud, and edge environments. Developers can build containerized apps with the tools of their choice and IT teams can use configuration as code to ensure that the apps are deployed, configured, and governed uniformly using GitOps-based configuration management across on-premises, multi-cloud, and edge.

Screenshot of OnPrem Configuration management

Adopt cloud practices like config management at scale

Deploy to and manage multiple locations at scale

To illustrate the above scenario for Azure Arc, let's take a look at a retailer with hundreds of stores that would like to move all in-store applications to containers running on Kubernetes clusters. They are faced with the challenge of how to uniformly deploy, configure, and manage their containerized applications across multiple locations.

With Azure Arc, IT and development teams can manage the app in existing stores and quickly light up a new location by automating error-prone and procedural tasks. Additionally, they get the following benefits:

  • At scale configuration and deployment based on Azure subscriptions, resource groups, and tags
  • GitOps-based model for deploying configuration-as-code to one or many clusters
  • Application deployment and update at scale
  • Source control based safe deployment practices when rolling out new applications and configurations
  • Freedom for developers to use the tools they are familiar with

Implement Azure security anywhere

We know the importance of security and compliance to businesses, so we brought our leadership in cloud security to on-premises, multi-cloud, and edge environments with Azure Arc. We built Azure Arc to bring capabilities and practices such as RBAC, the Azure activity log for auditing actions, Azure Lighthouse for secure delegated management, and enforcement of security policies through Azure Policy.

Screenshot of RBAC check access

Get started

We will be sharing more updates on Azure Arc at Microsoft Ignite this week. To learn more about Azure Arc, visit the Azure Arc page.

If you're at Microsoft Ignite this week, please attend the following sessions to learn more:
BRK 2208 Introduction to Azure Arc on Tuesday, Nov 05 at 11:45 am ET
BRK 3327 Azure Arc: Extend Management and Governance on Wednesday, Nov 06 at 1:00 PM ET

You can get started right away by previewing management of Windows and Linux servers across on-premises, multi-cloud, and edge. Join the preview to get started with managing Windows and Linux servers anywhere using Azure Arc.

Sign up for more information on Azure data services anywhere enabled by Azure Arc, and management of Kubernetes clusters by Azure Arc.


Azure. Invent with purpose.

Intel Optane DC Persistent memory, Azure NetApp Files, and Azure Ultra Disk for SAP HANA


With the recent preferred cloud partnership with SAP, both companies are committed to providing customers with a simplified path for migrating from on-premises SAP ERP to SAP S/4HANA in the cloud, on Azure. Microsoft Azure enables customers to be future-ready, and for SAP customers our promise is to continue offering market-leading innovation to support mission-critical SAP HANA and SAP S/4HANA workloads. With the recent general availability of Azure Mv2 virtual machines offering up to 12 TB of memory, purpose-built SAP HANA on Azure large instances offering scale-up to 24 TB and scale-out up to 120 TB, 32 SAP-certified configurations, global availability of SAP HANA infrastructure in 34 Azure regions, and a 99.99 percent availability SLA, Azure offers the best scale, performance, global availability, and reliability for mission-critical SAP applications.

SAP HANA on Azure Large Instances with Intel Optane DC persistent memory

Today, we’re announcing another market-leading innovation for SAP HANA customers with the general availability of new SAP HANA on Azure Large Instances, powered by second-generation Intel Xeon Scalable processors (codenamed Cascade Lake) and Intel Optane DC persistent memory. These instances are offered in single-node configurations with 3 TiB to 9 TiB of memory, 4 sockets, and 224 vCPUs, and are generally available now. We are working with SAP towards TDIv5 certification for the Intel Optane persistent memory based instances.

SKU       Total memory (TB)   DDR4 memory (TB)   Intel Optane persistent memory (TB)   SAP HANA certification
S224      3                   3                  -                                     OLTP; OLAP scale-up and scale-out up to 16 nodes
S224oo    4.5                 1.5                3                                     Planned: OLAP and OLTP; customer workload-specific TDIv5
S224m     6                   6                  -                                     OLTP
S224m     6                   6                  -                                     Planned: OLAP; customer workload-specific TDIv5
S224om    6                   3                  3                                     Planned: OLAP and OLTP; customer workload-specific TDIv5
S224ooo   7.5                 1.5                6                                     Planned: OLAP and OLTP; customer workload-specific TDIv5
S224oom   9                   3                  6                                     Planned: OLAP and OLTP; customer workload-specific TDIv5

We worked with SAP and Intel to bring the power of second-generation Intel Xeon Scalable processors and Optane persistent memory, which combines the persistence of an SSD with access times similar to DRAM, to deliver tangible benefits to SAP HANA customers. First, Intel Xeon Scalable processors provide higher performance and a higher memory ratio per processor. Coupled with Optane persistent memory, customers can now run these instances with a much higher memory-to-processor ratio under SAP TDIv5 certification, reducing the number of instances required for scale-up and scale-out scenarios and enabling a much lower total cost of ownership (TCO). Since Optane technology is persistent, the SAP HANA column store is available even after a power cycle, which is required for maintenance situations. Intel’s tests with SAP HANA and Intel Optane persistent memory have shown a 12x reduction in load time, which shrinks the maintenance window. Without persistent memory, table loads from disk can take hours. Because of the rapid data load times for restart scenarios, for some non-critical production systems this can eliminate the need for high availability (HA) configurations, saving cost and complexity.

Azure Ultra Disk for SAP HANA

Mission-critical SAP HANA deployments not only need the most scalable compute but also need high performance storage, to persist SAP HANA transactions quickly. Until now, Azure Premium SSD was the only Azure storage option that was certified for SAP HANA deployments on Azure Virtual Machines.

A few months ago, we announced the general availability of Azure Ultra Disk, a new high-performance storage offering that delivers up to 160K IOPS and 2 GBps throughput with sub-millisecond latency on a single disk. Azure Ultra Disk is now certified for SAP HANA with M-series, Mv2-series, and Ev3-series virtual machines (VMs). The low latency and high throughput offered by Ultra Disk can significantly accelerate SAP HANA database transactions. With the ability to dynamically change the provisioned IOPS and throughput on Ultra Disk, customers can now meet seasonal SAP workload needs at lower costs, without provisioning for peak performance year-round.

SAP HANA scale-out on Azure and Azure NetApp Files

SAP HANA provides scale-out configurations for SAP applications such as SAP Business Warehouse (BW) or S/4HANA. To improve the availability of such scale-out configurations, SAP HANA supports architectures where standby nodes are set aside in addition to the nodes performing the actual work. Such a standby node can take the role of an active node that is handling the workload, in case of patching or a malfunction of the active node. One of the basic requirements for such a scale-out plus standby node configuration is a high performing and low latency storage architecture that allows sharing of the HANA disk volumes across all nodes.

With Azure’s purpose-built SAP HANA on Azure large instances, we lead the industry in offering high-performance compute with such low-latency shared storage, enabling many mission-critical SAP scale-out deployments. CONA Services, the services arm for Coca-Cola bottlers, chose Azure to run one of the largest SAP HANA deployments in the public cloud, at 28 TB in a 7+1 node configuration, because of the higher availability provided by the purpose-built shared NFS storage. Over the last few months, CONA Services has been able to seamlessly grow their scale-out cluster to 40 TB in a 10+2 (10 active, 2 standby) cluster, an impressive scale, serving 160,000 orders a day.

Today, we’re announcing the unique ability to create such SAP HANA scale-out configurations with a standby node on HANA-certified Azure VMs and Azure NetApp Files, our purpose-built bare-metal file-storage service powered by NetApp. The Azure-native NFS v4.1 service offered on Azure NetApp Files is unique among the hyperscale cloud providers, with low storage latency and high throughput that fulfill all SAP HANA certification criteria. Customers deploying SAP HANA scale-out with a standby node on Azure VMs such as the M, Mv2, and E-series and Azure NetApp Files can achieve significantly higher availability, simplified maintenance, and higher performance at a lower TCO. Beyond offering scale-out plus standby node configurations with Azure’s HANA Large Instances, Azure is the only hyperscale cloud provider that now offers SAP HANA scale-out with standby node configurations for virtual machines. Azure NetApp Files is now available in 11 regions.

Customers migrating SAP workloads to Azure

With Azure’s continuous innovation for SAP HANA infrastructure services, deep partnership offerings with SAP, dedicated expertise in-house and through partners for SAP migration, we continue to see an uptick in the number of SAP customers migrating their mission-critical SAP workloads to Azure. Here are a few recent customers that have completed that journey.

Cemex: Cemex is a global leader in building materials based in Mexico, serving customers in 50 countries. Cemex chose Microsoft Azure for its digital transformation with SAP, starting with the migration of its Asia SAP landscape from SAP ECC on Oracle to ECC on SAP HANA. After migrating to SAP HANA on Azure, Cemex sees a 70 percent increase in transaction performance and 93 percent faster provisioning time. Cemex also leverages Microsoft Power BI with SAP HANA to accelerate business insights with easy-to-use, self-service BI reporting.

Achmea: Achmea is a Fortune 500 company and one of the leading insurance companies in Europe, with ten million customers and annual gross premium revenues of almost €20 billion. To become future ready and increase business agility, Achmea migrated to Azure for its mission-critical SAP BW, SAP Fraud Management, and SAP HANA data mart applications, running on SUSE Linux Enterprise Server. By migrating these SAP HANA based applications to Microsoft Azure, Achmea has gained a flexible, scalable, compliant, and enterprise-class platform for running mission-critical workloads.

TomTom: TomTom is a leading European telematics service provider, serving hundreds of millions of customers. TomTom runs SAP ERP and SAP BW at the core of their enterprise, and when their on-premises hardware could not keep up with the growing SAP HANA database demand, TomTom decided to migrate their SAP systems to Azure and completed the migration in under three months. By running SAP on Azure, TomTom has benefited from the agility of spinning up SAP environments in hours versus weeks, as well as higher availability and stability.

Thames Water: Thames Water manages the water supply for 10 million customers across London and the Thames Valley. The company relies on insights from data to solve problems on its network proactively, including leaks. To accelerate a manual, time-consuming process which could take 3-5 weeks, Thames Water decided to migrate its SAP systems to Azure to support faster, easier innovation. Working with Centiq, an SAP on Azure Partner, and Microsoft, Thames Water built deployment automation for its SAP BW and SAP S/4HANA systems by leveraging Azure APIs, Terraform, and Ansible. Today, they are able to spin-up an entire SAP system in under four hours, boosting agility, reducing operational costs, and increasing visibility into customer data.

To learn more about running SAP solutions on Azure, visit the SAP on Azure web page.

Intel, the Intel logo, Xeon, and Optane are trademarks of Intel Corporation in the U.S. and/or other countries.

 


Azure. Invent with purpose.

Bring Azure data services to your infrastructure with Azure Arc


With the exponential growth in data, organizations find themselves with increasingly heterogeneous data estates, full of data sprawl and silos, spreading across on-premises data centers, the edge, and multiple public clouds. It has been a balancing act for organizations trying to bring about innovation faster while maintaining consistent security and governance. The lack of a unified view of all their data assets across their environments adds extra complexity to data management best practices.

As Satya announced in his vision keynote at Microsoft Ignite, we are redefining hybrid by bringing innovation anywhere with Azure. We are introducing Azure Arc, which brings Azure services and management to any infrastructure. This enables Azure data services to run on any infrastructure using Kubernetes. Azure SQL Database and Azure Database for PostgreSQL Hyperscale are both available in preview on Azure Arc, and we will bring more data services to Azure Arc over time.

For customers who need to maintain data workloads in on-premises datacenters due to regulations, data sovereignty, latency, and so on, Azure Arc can bring the latest Azure innovation, cloud benefits like elastic scale and automation, unified management, and unmatched security on-premises. 

Always current

A top pain point we continue to hear from customers is the amount of work involved in patching and updating their on-premises databases. It requires constant diligence from corporate IT to ensure all databases are updated in a timely fashion. A fully managed database service, such as Azure SQL Database, removes the burden of patching and upgrades for customers who have migrated their databases to Azure.

Azure Arc helps to fully automate the patching and update process for databases running on-premises. Updates from the Microsoft Container Registry are automatically delivered to customers, and deployment cadences are set by customers in accordance with their policies. This way, on-premises databases can stay up to date while ensuring customers maintain control.

Azure Arc also enables on-premises customers to access the latest innovations such as the evergreen SQL through Azure SQL Database, which means customers will no longer face end-of-support for their databases. Moreover, a unique hyper-scale deployment option of Azure Database for PostgreSQL is made available on Azure Arc. This capability gives on-premises data workloads an additional boost on capacity optimization, using unique scale-out across reads and writes without application downtime.

Elastic scale

Cloud elasticity on-premises is another unique capability Azure Arc offers customers. The capability enables customers to scale their databases up or down dynamically in the same way as they do in Azure, based on the available capacity of their infrastructure. This can satisfy burst scenarios that have volatile needs, including scenarios that require ingesting and querying data in real-time, at any scale, with sub-second response time. In addition, customers can also scale-out database instances by setting up read replicas across multiple data centers or from their own data center into any public cloud.

Azure Arc also brings other cloud benefits such as fast deployment and automation at scale. Thanks to Kubernetes-based execution, customers can deploy a database in seconds and set up high availability, backup, and point-in-time restore with a few clicks. Compared to the time- and resource-consuming manual work currently required to do the same on-premises, these new capabilities greatly improve the productivity of database administration and enable faster continuous integration and continuous delivery, so IT teams can be more agile in unlocking business innovation.

Unified management

Using familiar tools such as the Azure portal, Azure Data Studio, and the Azure CLI, customers can now gain a unified view of all their data assets deployed with Azure Arc. Customers are able to not only view and manage a variety of relational databases across their environment and Azure, but also get logs and telemetry from Kubernetes APIs to analyze the underlying infrastructure capacity and health. Besides having localized log analytics and performance monitoring, customers can now leverage Azure Monitor on-premises for comprehensive operational insights across their entire estate. Moreover, Azure Backup can be easily connected to provide long-term, off-site backup retention and disaster recovery. Best of all, customers can now use cloud billing models for their on-premises data workloads to manage their costs efficiently.

See the full suite of management capabilities provided by Azure Arc (the Azure Arc data controller) in the diagram below.

Diagram of full suite management capabilities provided by Azure Arc

Unmatched security

Security is a top priority for corporate IT. Yet it has been challenging to keep up the security posture and maintain consistent governance on data workloads across different customer teams, functions, and infrastructure environments. With Azure Arc, for the first time, customers can access Azure’s unique security capabilities from the Azure Security Center for their on-premises data workloads. They can protect databases with features like advanced threat protection and vulnerability assessment, in the same way as they do in Azure.

Azure Arc also extends governance controls from Azure so that customers can use capabilities such as Azure Policy and Azure role-based access control across hybrid infrastructure. This consistency and well-defined boundaries at scale can bring peace of mind to IT regardless of where the data is.

Learn more about the unique benefits with Azure Arc for data workloads.


Azure. Invent with purpose.

.NET Core 3 for Windows Desktop


Intro

In September, we released .NET Core support for building Windows desktop applications, including WPF and Windows Forms. Since then, we have been delighted to see so many developers share their stories of migrating desktop applications (and controls libraries) to .NET Core. We constantly hear stories of .NET Windows desktop developers powering their business with WPF and Windows Forms, especially in scenarios where the desktop shines, including:

  • UI-dense forms over data (FOD) applications
  • Responsive low-latency UI
  • Applications that need to run offline/disconnected
  • Applications with dependencies on custom device drivers

This is just the beginning for Windows application development on .NET Core. Read on to learn more about the benefits of .NET Core for building Windows applications.

Why Windows desktop on .NET Core?

.NET Core (and in the future .NET 5, which is built on top of .NET Core) is the future of .NET. We are committed to supporting .NET Framework for years to come; however, it will not receive any new features; those will only be added to .NET Core (and eventually .NET 5). To improve the Windows desktop stacks and enable .NET desktop developers to benefit from all future updates, we brought Windows Forms and WPF to .NET Core. They will remain Windows-only technologies because of their tightly coupled dependencies on Windows APIs, but .NET Core, besides being cross-platform, has many other features that can enhance desktop applications.

First of all, all the runtime improvements and language features will be added only to .NET Core and, in the future, to .NET 5. A good example here is C# 8, which became available in .NET Core 3.0. Besides, the .NET Core versions of Windows Forms and WPF will become a part of the .NET 5 platform. So, by porting your applications to .NET Core today, you are preparing them for .NET 5.

Also, .NET Core brings deployment flexibility for your applications with new options that are not available in .NET Framework, such as:

  • Side-by-side deployment. Now you can have multiple .NET Core versions on the same machine and can choose which version each of your apps should target.
  • Self-contained deployment. You can deploy the .NET Core platform with your applications and become completely independent of your end users’ environment – your app has everything it needs to run on any Windows machine.
  • Smaller app sizes. In .NET Core 3 we introduced a new feature called linker (also sometimes referred to as trimmer), that will analyze your code and include in your self-contained deployment only those assemblies from .NET Core that are needed for your application. That way all platform parts that are not used for your case will be trimmed out.
  • Single .exe files. You can package your app and the .NET Core platform all in one .exe file.
  • Improved runtime performance. .NET Core has many performance optimizations compared to .NET Framework. When you think about the history of .NET Core, built initially for web and server workloads, it helps to understand if your application may see noticeable benefits from the runtime optimizations. Specifically, desktop applications with heavy dependencies on File I/O, networking, and database operations will likely see improvements to performance for those scenarios. Some areas where you may not notice much change are in UI rendering performance or application startup performance.

By setting the properties <PublishSingleFile>, <RuntimeIdentifier> and <PublishTrimmed> in the project file you’ll be able to deploy a trimmed self-contained application as a single .exe file as it is shown in the example below.

<PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <PublishSingleFile>true</PublishSingleFile>
    <RuntimeIdentifier>win-x64</RuntimeIdentifier>
    <PublishTrimmed>true</PublishTrimmed>
</PropertyGroup>
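With those properties set, a single publish command produces the trimmed, single-file, self-contained output; a typical invocation (the runtime identifier is just an example, and is redundant here because it is already set in the project file) looks like this:

C:> dotnet publish -c Release -r win-x64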

Differences between .NET Framework desktop and .NET Core desktop

While developing desktop applications, you won’t notice much difference between the .NET Framework and .NET Core versions of WPF and Windows Forms. Part of our effort was to provide functional parity between these platforms in the desktop area and to enhance the .NET Core experience in the future. WPF applications are fully supported on .NET Core and ready for you to use, while we work on minor updates and improvements. For Windows Forms, the runtime part is fully ported to .NET Core and the team is working on the Windows Forms designer. We are planning to have it ready by the fourth quarter of 2020; for now, you can check out the preview version of the designer in Visual Studio 16.4 Preview 3 or later. Don’t forget to set the checkbox under Tools > Options > Preview Features > Use the preview of Windows Forms designer for .NET Core apps and restart Visual Studio. Please keep in mind that the experience is limited for now, since work on it is in progress.

Breaking changes

There are a few breaking changes between .NET Framework and .NET Core but most of the code related to Windows Forms and WPF areas was ported to Core as-is. If you were using such components as WCF Client, Code Access Security, App Domains, Interop and Remoting, you will need to refactor your code if you want to switch to .NET Core.

Another thing to keep in mind: the default output path on .NET Core is different from that on .NET Framework, so if your code makes assumptions about the file/folder structure of the running app, it will probably fail at runtime.

Also, there are changes in how you configure .NET features. Instead of the machine.config file, .NET Core uses a <something>.runtimeconfig.json file that ships with the application and serves the same general purpose with similar information. Some configuration sections such as system.diagnostics, system.net, or system.servicemodel are not supported, so an app config file will fail to load if it contains any of these sections. This change affects System.Diagnostics tracing and WCF client scenarios, which were commonly configured using XML configuration previously. In .NET Core you’ll need to configure them in code instead. To change behaviors without recompiling, consider setting up tracing and WCF types using values loaded from a Microsoft.Extensions.Configuration source or from appSettings.
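A minimal sketch of that approach, assuming the Microsoft.Extensions.Configuration and Microsoft.Extensions.Configuration.Json packages are referenced (the file name and keys below are hypothetical examples, not a fixed schema):

using System;
using Microsoft.Extensions.Configuration;

class Program
{
    static void Main()
    {
        // Build configuration from an appsettings.json shipped alongside the app
        IConfiguration config = new ConfigurationBuilder()
            .SetBasePath(AppContext.BaseDirectory)
            .AddJsonFile("appsettings.json", optional: true)
            .Build();

        // Read values and apply them in code, for example when constructing a WCF
        // client binding or configuring a trace source (keys are hypothetical)
        string endpointAddress = config["Wcf:EndpointAddress"];
        string traceLevel = config["Diagnostics:TraceLevel"];

        Console.WriteLine($"Endpoint: {endpointAddress}, trace level: {traceLevel}");
    }
}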

You can find more information on differences between .NET Core and .NET Framework in the documentation.

Getting Started

Check out these short video tutorials:

Porting from .NET Framework to .NET Core

First of all, run the Portability Analyzer and, if needed, update your code to get 100% compatibility with .NET Core. Here are instructions on using the Portability Analyzer. We recommend using source control or backing up your code before you make any changes to your application, in case the refactoring doesn’t go the way you want and you decide to go back to your initial state.

When your application is fully compatible with .NET Core, you are ready to port it. As a starting point, you can try out a tool we created to help automate converting your .NET Framework project(s) to .NET Core – try-convert.

It’s important to remember that this tool is just a starting point in your journey to .NET Core. It is also not a supported Microsoft product. Although it can help you with some of the mechanical aspects of migration, it will not handle all scenarios or project types. If your solution has projects that the tool rejects or fails to convert, you’ll have to port by hand. No worries, we have plenty of tutorials on how to do it (in the end of this section).

The try-convert tool will attempt to migrate your old-style project files to the new SDK style and retarget applicable projects to .NET Core. For your libraries, we leave it up to you to make the call regarding the platform: whether you’d like to target .NET Core or .NET Standard. You can specify it in your project file by updating the value of <TargetFramework>. Libraries without .NET Core-specific dependencies like WPF or Windows Forms may benefit from targeting .NET Standard:

<TargetFramework>netstandard2.1</TargetFramework>

so that they can be used by callers targeting many different .NET Platforms. On the other hand, if a library uses a feature that requires .NET Core (like Windows desktop UI APIs), it’s fine for the library to target .NET Core:

<TargetFramework>netcoreapp3.0</TargetFramework>

try-convert is a global tool that you can install on your machine and then call from the CLI:

C:> try-convert -p <path to your project>

or

C:> try-convert -w <path to your solution>

As previously mentioned, if the try-convert tool did not work for you, here are materials on how to port your application by hand.

Videos

Documentation

The post .NET Core 3 for Windows Desktop appeared first on .NET Blog.

Growing partner opportunity with Azure innovation


At Microsoft Ignite, we are sharing a wealth of new products and business news, across Microsoft’s unique technology stack—spanning on-premises, client, server, and cloud. For Microsoft Azure, we have always believed in building products and programs that help our customers invent with purpose. Our announcements reinforce this belief and deliver on our promises of helping customers be future ready, build on their terms, operate hybrid seamlessly, and do all this with an uncompromising foundation of trust.

To see the full list and details of these Azure announcements, please visit the Microsoft Ignite webpage here.

These announcements also unlock tremendous opportunities for you, our partners, to acquire new customers and grow Azure projects within your existing customer base. In this blog, we want to dig deeper in two key announcements, highlight the respective opportunities, resources, and how to take action.

Grow your business with Azure services that now run anywhere with new hybrid capabilities

At Microsoft Ignite, we take a leap forward in enabling customers to move from just hybrid cloud to truly deliver innovation anywhere with Azure.

  • To give customers the benefits of cloud innovation, including always up-to-date data capabilities, we’re delivering the ability for customers to run Azure data services anywhere.
  • Millions of Azure resources are organized, governed, and secured daily by customers using Azure management. Azure Arc extends these Azure management capabilities to Linux and Windows servers, as well as Kubernetes clusters on any infrastructure across datacenter, multi-cloud, and edge.
  • We are also expanding our Azure Stack Hub portfolio to offer our customers even more flexibility with the addition of Azure Stack Edge. Azure Stack Edge, previously Azure Data Box Edge, is a managed AI-enabled edge appliance that brings compute, storage, and intelligence to any edge.

As an Azure partner, Azure Arc now enables you to manage a customer’s infrastructure through one consistent and unified set of tools across on-premises, multi-cloud, and at the edge. You can also implement cloud security across a customer’s environment with centralized role-based access control, security policies, and advanced threat protection.

With Azure Lighthouse, you now have the ability to consistently manage a customer's Azure environment, and on-premises resources available via Azure Arc, from a single control plane, applying automation at scale.

Learn more about these exciting new hybrid capabilities.

Expand your analytics practice with Azure Synapse Analytics

We also announced Azure Synapse Analytics, a limitless analytics service that brings together enterprise data warehousing and big data analytics. Simply put, Azure Synapse Analytics is the next evolution of Azure SQL Data Warehouse, delivering limitless scale, powerful insights, a unified experience, and unmatched security. We have taken our industry-leading data warehouse to a whole new level of performance and capabilities. Businesses can continue running their existing data warehouse workloads in production today with Azure Synapse Analytics and will automatically benefit from the new capabilities, which are in preview.

If you are a data partner, Azure Synapse Analytics opens up new opportunities to help new and existing customers get more out of their business data. Through the unified experience and unmatched security, you can address the needs of everyone, from data engineers managing pipelines to business analysts trying to securely access datasets. If your practice helps customers garner insights for their business, Azure Synapse Analytics enables quick business insights and machine learning, reducing the time it takes to deliver those insights to the customer. All of this comes at limitless scale to grow with the needs of your customers. And any independent software vendor (ISV) apps that worked with Azure SQL Data Warehouse will keep working with Azure Synapse Analytics.

Learn more about Azure Synapse Analytics.

We also assembled a curated set of resources for you to learn more about these new capabilities and respond to your customers' needs. These resources are located on our partners page.


Azure. Invent with purpose.


Expanding the Azure Stack portfolio to run hybrid applications across the cloud, datacenters, and the edge

Improved Continuous Delivery capabilities and caching for Azure Pipelines


We are thrilled to announce a new set of updates for Azure Pipelines, the Continuous Integration and Continuous Delivery platform part of Azure DevOps. Our team has been hard at work for the past few months to deliver new features, including some that were much-requested by our users.

Improved CD capabilities for multi-stage YAML pipelines

Developers using multi-stage YAML pipelines can now manage deployment workflows in separate pipelines and trigger them using resources. As part of this experience we have added support for consuming other pipelines and images from Azure Container Registry (ACR) as resources.

“pipeline” as a resource

If you have an Azure Pipeline that produces artifacts, you can consume the artifacts by defining a pipeline resource, and you can also enable your pipeline to be automatically triggered on the completion of that pipeline resource.

resources:
  pipelines:
  - pipeline: SmartHotel
    source: SmartHotel-CI 
    trigger: 
      branches:
      - 'releases/*'
      - master

You can find more details about the pipeline resource here.

Azure Container Registry as “container” resource

You can also use Azure Container Registry (ACR) container images to trigger the execution of a pipeline in Azure Pipelines when a new image is published to ACR:

resources:
  containers:
  - container: MyACR  
    type: ACR
    azureSubscription: RMPM
    resourceGroup: contosoRG
    registry: contosodemo
    repository: alphaworkz
    trigger: 
      tags:
        include: 
        - 'production*'

You can find more details about the ACR resource here.

Just like with repositories, we support complete traceability for both pipeline and ACR container resources. For example, you can see the changes and the work items that were consumed in a run and the environment they were deployed to.

New deployment strategies

One of the key advantages of continuous delivery of application updates is the ability to quickly push updates into production for specific microservices. This gives you the ability to quickly respond to changes in business requirements.

With this update, we are announcing support for two new deployment strategies: canary for Azure Kubernetes Service (AKS), and rolling for Virtual Machines (in private preview). This is in addition to the rolling deployment strategy support for AKS that we introduced in the spring. Support for blue-green deployments and other resource types is coming in a few months.

Canary: In this strategy, the newer version (canary) is deployed next to the current version (stable), but only a portion of traffic is routed to canary to minimize risk. Once canary is found to be good based on metrics and other parameters, the exposure to newer version is gradually increased.

Previously, when the canary strategy was specified with the KubernetesManifest task, the task created baseline and canary workloads whose replicas equaled a percentage of the replicas used for stable workloads. This was not exactly the same as splitting traffic up to the desired percentage at the request level. To tackle this, we have now introduced support for Service Mesh Interface-based canary deployments in KubernetesManifest task. Service Mesh Interface (SMI) abstraction allows for plug-and-play configuration with service mesh providers such as Linkerd and Istio, while the KubernetesManifest task takes away the hard work of mapping SMI’s TrafficSplit objects to the stable, baseline and canary services during the lifecycle of the deployment strategy. The desired percentage split of traffic between stable, baseline and canary is more accurate as the percentage traffic split is done at the request level in the service mesh plane.

Consider the following pipeline for performing canary deployments on Kubernetes using the Service Mesh Interface in an incremental manner:

- deployment: canaryDemo          # job name is a placeholder added so the YAML is valid
  displayName: Canary deployment  # display name is a placeholder
  pool:
    vmImage: $(pool)
  environment: ignite.smi
  strategy:
    canary:
      increments: [25, 50]
      deploy:
        steps:
        - checkout: self
        - task: KubernetesManifest@0
          displayName: Deploy canary
          inputs:
            action: $(strategy.action)
            namespace: smi
            strategy: $(strategy.name)
            trafficSplitMethod: smi
            percentage: $(strategy.increment)
            baselineAndCanaryReplicas: 1
            manifests: 'manifests/*'
            containers: '$(imageRepository):$(Build.BuildId)'
      postRouteTraffic:
        pool: server
        steps:
          - task: Delay@1
            inputs:
              delayForMinutes: '2'
      on:
        failure:
          steps:            
          - script: echo deployment failed...
          - task: KubernetesManifest@0
            inputs:
              action: reject
              namespace: smi
              strategy: $(strategy.name)
              manifests: 'manifests/*'
        success:
          steps:
          - script: echo 'Successfully deployed'

In this case, requests are routed incrementally to the canary deployment (at 25%, 50%, and then 100%) while providing a way for the user to gauge the health of the application between each of these increments under the postRouteTraffic lifecycle hook.

Both the canary and rolling strategies support the following lifecycle hooks: preDeploy (executed once), iterations with the deploy, routeTraffic, and postRouteTraffic lifecycle hooks, and an exit with either the success or failure hook.

To learn more, check out the YAML schema for deployment jobs and the deployment strategies design document.

We are looking for early feedback on support for the Virtual Machine resource in environments and on performing the rolling deployment strategy across multiple machines, which is now available in private preview. You can enroll here.

Pipeline Artifacts and Pipeline Caching

Pipeline Artifacts and Pipeline Caching are now generally available.

You can use Pipeline Artifacts to store build outputs and move intermediate files between jobs in your pipeline. You can also download the artifacts from a pipeline from the build page, for as long as the build is retained. Pipeline Artifacts are the new generation of build artifacts: they take advantage of existing services to dramatically reduce the time it takes to store outputs in your pipelines. 
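To make this concrete, here is a minimal sketch of our own (not from the original post; the task inputs follow the publicly documented PublishPipelineArtifact and DownloadPipelineArtifact tasks, and the artifact name and paths are placeholders) showing an artifact published in one job and downloaded in another:

jobs:
- job: Build
  steps:
  # Produce a build output and publish it as a pipeline artifact named 'webapp'.
  - script: dotnet publish -c Release -o $(Build.ArtifactStagingDirectory)
  - task: PublishPipelineArtifact@1
    inputs:
      targetPath: $(Build.ArtifactStagingDirectory)
      artifact: webapp

- job: Deploy
  dependsOn: Build
  steps:
  # Download the same artifact into this job's workspace.
  - task: DownloadPipelineArtifact@2
    inputs:
      artifact: webapp
      path: $(Pipeline.Workspace)/webapp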

Pipeline Caching can help reduce build time by allowing the outputs or downloaded dependencies from one run to be reused in later runs, thereby reducing or avoiding the cost to recreate or re-download the same files again. Caching is especially useful in scenarios where the same dependencies are compiled or downloaded over and over at the start of each run. This is often a time-consuming process involving hundreds or thousands of network calls.
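As an illustrative example of our own (it follows the documented Cache task pattern for npm; the cache key and paths are typical choices, not prescriptions from the post), a cache keyed on the lockfile lets later runs reuse previously downloaded packages:

variables:
  npm_config_cache: $(Pipeline.Workspace)/.npm   # npm's cache directory for this pipeline

steps:
# Restore the cache when the key matches; the cache is saved again after a successful run.
- task: Cache@2
  inputs:
    key: 'npm | "$(Agent.OS)" | package-lock.json'
    restoreKeys: |
      npm | "$(Agent.OS)"
    path: $(npm_config_cache)
- script: npm ci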

What’s next

We’re constantly at work to improve the Azure Pipelines experience for our users. You can take a sneak peek into the work that’s planned for the coming months by looking at our updated roadmap for this quarter.

As always, please let us know if you have any feedback by posting on our Developer Community, or by reaching out on Twitter at @AzureDevOps.

The post Improved Continuous Delivery capabilities and caching for Azure Pipelines appeared first on Azure DevOps Blog.

Announcing TypeScript 3.7


We’re thrilled to announce the release of TypeScript 3.7, a release packed with awesome new language, compiler, and tooling features.

If you haven’t yet heard of TypeScript, it’s a language based on JavaScript that adds static type-checking along with type syntax. Static type-checking lets us know about problems with our code before we try to run it by reporting errors if we do something questionable. This ranges from type coercions that can happen in code like 42 / "hello", or even basic typos on property names. But beyond this, TypeScript powers things like completions, quick fixes, and refactorings for both TypeScript and JavaScript in some of your favorite editors. In fact, if you already use Visual Studio or Visual Studio Code, you might already be using TypeScript when you write JavaScript code! So if you’re interested in learning more, head over to our website.

If you’re already ready to use TypeScript, you can get it through NuGet, or use npm with the following command:

npm install typescript

You can also get editor support in Visual Studio 2017/2019, Visual Studio Code, and Sublime Text 3.

We’ve got a lot of great features in TypeScript 3.7, including:

  • Optional Chaining
  • Nullish Coalescing
  • Assertion Functions
  • Better Support for never-Returning Functions
  • --declaration and --allowJs
  • (More) Recursive Type Aliases
  • The useDefineForClassFields Flag and the declare Property Modifier
  • Build-Free Editing with Project References
  • Uncalled Function Checks
  • Flatter Error Reporting
  • // @ts-nocheck in TypeScript Files
  • Semicolon Formatter Option
  • Website and Playground Updates
  • Breaking Changes

This is a pretty extensive list! If you’re into reading, you’re in for some fun with this release. But if you’re the type of person who likes to learn by getting their hands dirty, check out the TypeScript playground where we’ve added an entire menu for learning what’s new.

A screenshot of the TypeScript playground which now has a section for learning what's new.

Without further ado, let’s dive in and look at what’s new!

Optional Chaining

TypeScript 3.7 implements one of the most highly-demanded ECMAScript features yet: optional chaining!

Optional chaining is issue #16 on our issue tracker. For context, there have been over 23,000 issues filed on the TypeScript issue tracker to date. This one was filed over 5 years ago – before there was even a formal proposal within TC39. For years, we’ve been asked to implement the feature, but our stance has long been not to conflict with potential ECMAScript proposals. Instead, our team recently took the steps to help drive the proposal to standardization, and ultimately to all JavaScript and TypeScript users. In fact, we became involved to the point where we were championing the proposal! With its advancement to stage 3, we’re comfortable and proud to release it as part of TypeScript 3.7.

So what is optional chaining? Well at its core, optional chaining lets us write code where we can immediately stop running some expressions if we run into a null or undefined. The star of the show in optional chaining is the new ?. operator for optional property accesses. When we write code like

let x = foo?.bar.baz();

this is a way of saying that when foo is defined, foo.bar.baz() will be computed; but when foo is null or undefined, stop what we’re doing and just return undefined.

More plainly, that code snippet is the same as writing the following.

let x = (foo === null || foo === undefined) ?
    undefined :
    foo.bar.baz();

Note that if bar is null or undefined, our code will still hit an error accessing baz. Likewise, if baz is null or undefined, we’ll hit an error at the call site. ?. only checks for whether the value on the left of it is null or undefined – not any of the subsequent properties.

You might find yourself using ?. to replace a lot of code that performs repetitive nullish checks using the && operator.

// Before
if (foo && foo.bar && foo.bar.baz) {
    // ...
}

// After-ish
if (foo?.bar?.baz) {
    // ...
}

Keep in mind that ?. acts differently than those && operations since && will act specially on “falsy” values (e.g. the empty string, 0, NaN, and, well, false), but this is an intentional feature of the construct. It doesn’t short-circuit on valid data like 0 or empty strings.

Optional chaining also includes two other operations. First there’s the optional element access which acts similarly to optional property accesses, but allows us to access non-identifier properties (e.g. arbitrary strings, numbers, and symbols):

/**
 * Get the first element of the array if we have an array.
 * Otherwise return undefined.
 */
function tryGetFirstElement<T>(arr?: T[]) {
    return arr?.[0];
    // equivalent to
    //   return (arr === null || arr === undefined) ?
    //       undefined :
    //       arr[0];
}

There’s also optional call, which allows us to conditionally call expressions if they’re not null or undefined.

async function makeRequest(url: string, log?: (msg: string) => void) {
    log?.(`Request started at ${new Date().toISOString()}`);
    // roughly equivalent to
    //   if (log != null) {
    //       log(`Request started at ${new Date().toISOString()}`);
    //   }

    const result = (await fetch(url)).json();

    log?.(`Request finished at ${new Date().toISOString()}`);

    return result;
}

The “short-circuiting” behavior that optional chains have is limited to property accesses, calls, and element accesses – it doesn’t expand any further out from these expressions. In other words,

let result = foo?.bar / someComputation()

doesn’t stop the division or someComputation() call from occurring. It’s equivalent to

let temp = (foo === null || foo === undefined) ?
    undefined :
    foo.bar;

let result = temp / someComputation();

That might result in dividing undefined, which is why in strictNullChecks, the following is an error.

function barPercentage(foo?: { bar: number }) {
    return foo?.bar / 100;
    //     ~~~~~~~~
    // Error: Object is possibly undefined.
}

For more details, you can read up on the proposal and view the original pull request.

Nullish Coalescing

The nullish coalescing operator is another upcoming ECMAScript feature that goes hand-in-hand with optional chaining, and which our team has been involved with championing.

You can think of this feature – the ?? operator – as a way to “fall back” to a default value when dealing with null or undefined. When we write code like

let x = foo ?? bar();

this is a new way to say that the value foo will be used when it’s “present”; but when it’s null or undefined, calculate bar() in its place.

Again, the above code is equivalent to the following.

let x = (foo !== null && foo !== undefined) ?
    foo :
    bar();

The ?? operator can replace uses of || when trying to use a default value. For example, the following code snippet tries to fetch the volume that was last saved in localStorage (if it ever was); however, it has a bug because it uses ||.

function initializeAudio() {
    let volume = localStorage.volume || 0.5

    // ...
}

When localStorage.volume is set to 0, the page will set the volume to 0.5 which is unintended. ?? avoids some unintended behavior from 0, NaN and "" being treated as falsy values.
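Rewritten with ??, the snippet keeps a saved volume of 0 and only falls back when the value is null or undefined (a small sketch of the fix on our part, not code taken from the post):

function initializeAudio() {
    // 0 is a valid saved volume; fall back to 0.5 only when the value is null or undefined.
    let volume = localStorage.volume ?? 0.5;

    // ...
}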

We owe a large thanks to community members Wenlu Wang and Titian Cernicova Dragomir for implementing this feature! For more details, check out their pull request and the nullish coalescing proposal repository.

Assertion Functions

There’s a specific set of functions that throw an error if something unexpected happened. They’re called “assertion” functions. As an example, Node.js has a dedicated function for this called assert.

assert(someValue === 42);

In this example if someValue isn’t equal to 42, then assert will throw an AssertionError.

Assertions in JavaScript are often used to guard against improper types being passed in. For example,

function multiply(x, y) {
    assert(typeof x === "number");
    assert(typeof y === "number");

    return x * y;
}

Unfortunately in TypeScript these checks could never be properly encoded. For loosely-typed code this meant TypeScript was checking less, and for slightly conservative code it often forced users to use type assertions.

function yell(str) {
    assert(typeof str === "string");

    return str.toUppercase();
    // Oops! We misspelled 'toUpperCase'.
    // Would be great if TypeScript still caught this!
}

The alternative was to instead rewrite the code so that the language could analyze it, but this isn’t convenient.

function yell(str) {
    if (typeof str !== "string") {
        throw new TypeError("str should have been a string.")
    }
    // Error caught!
    return str.toUppercase();
}

Ultimately the goal of TypeScript is to type existing JavaScript constructs in the least disruptive way. For that reason, TypeScript 3.7 introduces a new concept called “assertion signatures” which model these assertion functions.

The first type of assertion signature models the way that Node’s assert function works. It ensures that whatever condition is being checked must be true for the remainder of the containing scope.

function assert(condition: any, msg?: string): asserts condition {
    if (!condition) {
        throw new AssertionError(msg)
    }
}

asserts condition says that whatever gets passed into the condition parameter must be true if the assert returns (because otherwise it would throw an error). That means that for the rest of the scope, that condition must be truthy. As an example, using this assertion function means we do catch our original yell example.

function yell(str) {
    assert(typeof str === "string");

    return str.toUppercase();
    //         ~~~~~~~~~~~
    // error: Property 'toUppercase' does not exist on type 'string'.
    //        Did you mean 'toUpperCase'?
}

function assert(condition: any, msg?: string): asserts condition {
    if (!condition) {
        throw new AssertionError(msg)
    }
}

The other type of assertion signature doesn’t check for a condition, but instead tells TypeScript that a specific variable or property has a different type.

function assertIsString(val: any): asserts val is string {
    if (typeof val !== "string") {
        throw new AssertionError("Not a string!");
    }
}

Here asserts val is string ensures that after any call to assertIsString, any variable passed in will be known to be a string.

function yell(str: any) {
    assertIsString(str);

    // Now TypeScript knows that 'str' is a 'string'.

    return str.toUppercase();
    //         ~~~~~~~~~~~
    // error: Property 'toUppercase' does not exist on type 'string'.
    //        Did you mean 'toUpperCase'?
}

These assertion signatures are very similar to writing type predicate signatures:

function isString(val: any): val is string {
    return typeof val === "string";
}

function yell(str: any) {
    if (isString(str)) {
        return str.toUppercase();
    }
    throw "Oops!";
}

And just like type predicate signatures, these assertion signatures are incredibly expressive. We can express some fairly sophisticated ideas with these.

function assertIsDefined<T>(val: T): asserts val is NonNullable<T> {
    if (val === undefined || val === null) {
        throw new AssertionError(
            `Expected 'val' to be defined, but received ${val}`
        );
    }
}
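As a small usage sketch of our own (not from the post), calling assertIsDefined narrows an optional parameter for the rest of the function:

function greet(name?: string) {
    assertIsDefined(name);
    // After the assertion, 'name' is narrowed from 'string | undefined' to 'string'.
    return "Hello, " + name.toUpperCase();
}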

To read up more about assertion signatures, check out the original pull request.

Better Support for never-Returning Functions

As part of the work for assertion signatures, TypeScript needed to encode more about where and which functions were being called. This gave us the opportunity to expand support for another class of functions: functions that return never.

The intent of any function that returns never is that it never returns. It indicates that an exception was thrown, a halting error condition occurred, or that the program exited. For example, process.exit(...) in @types/node is specified to return never.

In order to ensure that a function never potentially returned undefined or effectively returned from all code paths, TypeScript needed some syntactic signal – either a return or throw at the end of a function. So users found themselves return-ing their failure functions.

function dispatch(x: string | number): SomeType {
    if (typeof x === "string") {
        return doThingWithString(x);
    }
    else if (typeof x === "number") {
        return doThingWithNumber(x);
    }
    return process.exit(1);
}

Now when these never-returning functions are called, TypeScript recognizes that they affect the control flow graph and accounts for them.

function dispatch(x: string | number): SomeType {
    if (typeof x === "string") {
        return doThingWithString(x);
    }
    else if (typeof x === "number") {
        return doThingWithNumber(x);
    }
    process.exit(1);
}

As with assertion functions, you can read up more at the same pull request.

--declaration and --allowJs

The --declaration flag in TypeScript allows us to generate .d.ts files (declaration files) from TypeScript source files (i.e. .ts and .tsx files). These .d.ts files are important for a couple of reasons.

First of all, they’re important because they allow TypeScript to type-check against other projects without re-checking the original source code. They’re also important because they allow TypeScript to interoperate with existing JavaScript libraries that weren’t built with TypeScript in mind. Finally, a benefit that is often underappreciated: both TypeScript and JavaScript users can benefit from these files when using editors powered by TypeScript to get things like better auto-completion.

Unfortunately, --declaration didn’t work with the --allowJs flag which allows mixing TypeScript and JavaScript input files. This was a frustrating limitation because it meant users couldn’t use the --declaration flag when migrating codebases, even if they were JSDoc-annotated. TypeScript 3.7 changes that, and allows the two options to be used together!

The most impactful outcome of this feature might be a bit subtle: with TypeScript 3.7, users can write libraries in JSDoc-annotated JavaScript and support TypeScript users.

The way that this works is that when using allowJs, TypeScript has some best-effort analyses to understand common JavaScript patterns; however, the way that some patterns are expressed in JavaScript doesn’t necessarily look like their equivalents in TypeScript. When declaration emit is turned on, TypeScript figures out the best way to transform JSDoc comments and CommonJS exports into valid type declarations and the like in the output .d.ts files.

As an example, the following code snippet

const assert = require("assert")

module.exports.blurImage = blurImage;

/**
 * Produces a blurred image from an input buffer.
 * 
 * @param input {Uint8Array}
 * @param width {number}
 * @param height {number}
 */
function blurImage(input, width, height) {
    const numPixels = width * height * 4;
    assert(input.length === numPixels);
    const result = new Uint8Array(numPixels);

    // TODO

    return result;
}

will produce a .d.ts file like

/**
 * Produces a blurred image from an input buffer.
 *
 * @param input {Uint8Array}
 * @param width {number}
 * @param height {number}
 */
export function blurImage(input: Uint8Array, width: number, height: number): Uint8Array;

This can go beyond basic functions with @param tags too, where the following example:

/**
 * @callback Job
 * @returns {void}
 */

/** Queues work */
export class Worker {
    constructor(maxDepth = 10) {
        this.started = false;
        this.depthLimit = maxDepth;
        /**
         * NOTE: queued jobs may add more items to queue
         * @type {Job[]}
         */
        this.queue = [];
    }
    /**
     * Adds a work item to the queue
     * @param {Job} work 
     */
    push(work) {
        if (this.queue.length + 1 > this.depthLimit) throw new Error("Queue full!");
        this.queue.push(work);
    }
    /**
     * Starts the queue if it has not yet started
     */
    start() {
        if (this.started) return false;
        this.started = true;
        while (this.queue.length) {
            /** @type {Job} */(this.queue.shift())();
        }
        return true;
    }
}

will be transformed into the following .d.ts file:

/**
 * @callback Job
 * @returns {void}
 */
/** Queues work */
export class Worker {
    constructor(maxDepth?: number);
    started: boolean;
    depthLimit: number;
    /**
     * NOTE: queued jobs may add more items to queue
     * @type {Job[]}
     */
    queue: Job[];
    /**
     * Adds a work item to the queue
     * @param {Job} work
     */
    push(work: Job): void;
    /**
     * Starts the queue if it has not yet started
     */
    start(): boolean;
}
export type Job = () => void;

Note that when using these flags together, TypeScript doesn’t necessarily have to downlevel .js files. If you simply want TypeScript to create .d.ts files, you can use the --emitDeclarationOnly compiler option.
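For example, a minimal tsconfig.json of our own (output directory and source glob are placeholders) that emits only declaration files from JSDoc-annotated JavaScript might look like:

{
  "compilerOptions": {
    "allowJs": true,              // accept .js input files
    "declaration": true,          // generate .d.ts files
    "emitDeclarationOnly": true,  // skip transpiling the .js files themselves
    "outDir": "types"             // placeholder output directory
  },
  "include": ["src/**/*.js"]      // placeholder source glob
}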

For more details, you can check out the original pull request.

(More) Recursive Type Aliases

Type aliases have always had a limitation in how they could be “recursively” referenced. The reason is that any use of a type alias needs to be able to substitute itself with whatever it aliases. In some cases, that’s not possible, so the compiler rejects certain recursive aliases like the following:

type Foo = Foo;

This is a reasonable restriction because any use of Foo would need to be replaced with Foo which would need to be replaced with Foo which would need to be replaced with Foo which… well, hopefully you get the idea! In the end, there isn’t a type that makes sense in place of Foo.

This is fairly consistent with how other languages treat type aliases, but it does give rise to some slightly surprising scenarios for how users leverage the feature. For example, in TypeScript 3.6 and prior, the following causes an error.

type ValueOrArray<T> = T | Array<ValueOrArray<T>>;
//   ~~~~~~~~~~~~
// error: Type alias 'ValueOrArray' circularly references itself.

This is strange because there is technically nothing wrong with any use; users could always write what was effectively the same code by introducing an interface.

type ValueOrArray<T> = T | ArrayOfValueOrArray<T>;

interface ArrayOfValueOrArray<T> extends Array<ValueOrArray<T>> {}

Because interfaces (and other object types) introduce a level of indirection and their full structure doesn’t need to be eagerly built out, TypeScript has no problem working with this structure.

But this workaround of introducing the interface wasn’t intuitive for users. And in principle there really wasn’t anything wrong with the original version of ValueOrArray that used Array directly. If the compiler was a little bit “lazier” and only calculated the type arguments to Array when necessary, then TypeScript could express these correctly.

That’s exactly what TypeScript 3.7 introduces. At the “top level” of a type alias, TypeScript will defer resolving type arguments to permit these patterns.

This means that code like the following that was trying to represent JSON…

type Json =
    | string
    | number
    | boolean
    | null
    | JsonObject
    | JsonArray;

interface JsonObject {
    [property: string]: Json;
}

interface JsonArray extends Array<Json> {}

can finally be rewritten without helper interfaces.

type Json =
    | string
    | number
    | boolean
    | null
    | { [property: string]: Json }
    | Json[];

This new relaxation also lets us recursively reference type aliases in tuples. The following code which used to error is now valid TypeScript code.

type VirtualNode =
    | string
    | [string, { [key: string]: any }, ...VirtualNode[]];

const myNode: VirtualNode =
    ["div", { id: "parent" },
        ["div", { id: "first-child" }, "I'm the first child"],
        ["div", { id: "second-child" }, "I'm the second child"]
    ];

For more information, you can read up on the original pull request.

The useDefineForClassFields Flag and The declare Property Modifier

Back when TypeScript implemented public class fields, we assumed to the best of our abilities that the following code

class C {
    foo = 100;
    bar: string;
}

would be equivalent to a similar assignment within a constructor body.

class C {
    constructor() {
        this.foo = 100;
    }
}

Unfortunately, while this seemed to be the direction that the proposal moved towards in its earlier days, there is an extremely strong chance that public class fields will be standardized differently. Instead, the original code sample might need to de-sugar to something closer to the following:

class C {
    constructor() {
        Object.defineProperty(this, "foo", {
            enumerable: true,
            configurable: true,
            writable: true,
            value: 100
        });
        Object.defineProperty(this, "bar", {
            enumerable: true,
            configurable: true,
            writable: true,
            value: void 0
        });
    }
}

While TypeScript 3.7 isn’t changing any existing emit by default, we’ve been rolling out changes incrementally to help users mitigate potential future breakage. We’ve provided a new flag called useDefineForClassFields to enable this emit mode with some new checking logic.

The two biggest changes are the following:

  • Declarations are initialized with Object.defineProperty.
  • Declarations are always initialized to undefined, even if they have no initializer.

This can cause quite a bit of fallout for existing code that uses inheritance. First of all, set accessors from base classes won’t get triggered – they’ll be completely overwritten.

class Base {
    set data(value: string) {
        console.log("data changed to " + value);
    }
}

class Derived extends Base {
    // No longer triggers a 'console.log' 
    // when using 'useDefineForClassFields'.
    data = 10;
}

Secondly, using class fields to specialize properties from base classes also won’t work.

interface Animal { animalStuff: any }
interface Dog extends Animal { dogStuff: any }

class AnimalHouse {
    resident: Animal;
    constructor(animal: Animal) {
        this.resident = animal;
    }
}

class DogHouse extends AnimalHouse {
    // Initializes 'resident' to 'undefined'
    // after the call to 'super()' when
    // using 'useDefineForClassFields'!
    resident: Dog;

    constructor(dog: Dog) {
        super(dog);
    }
}

What these two boil down to is that mixing properties with accessors is going to cause issues, and so will re-declaring properties with no initializers.

To detect the issue around accessors, TypeScript 3.7 will now emit get/set accessors in .d.ts files so that TypeScript can check for overridden accessors.

Code that’s impacted by the class fields change can get around the issue by converting field initializers to assignments in constructor bodies.

class Base {
    set data(value: string) {
        console.log("data changed to " + value);
    }
}

class Derived extends Base {
    constructor() {
        super();
        this.data = 10;
    }
}

To help mitigate the second issue, you can either add an explicit initializer or add a declare modifier to indicate that a property should have no emit.

interface Animal { animalStuff: any }
interface Dog extends Animal { dogStuff: any }

class AnimalHouse {
    resident: Animal;
    constructor(animal: Animal) {
        this.resident = animal;
    }
}

class DogHouse extends AnimalHouse {
    declare resident: Dog;
//  ^^^^^^^
// 'resident' now has a 'declare' modifier,
// and won't produce any output code.

    constructor(dog: Dog) {
        super(dog);
    }
}

Currently useDefineForClassFields is only available when targeting ES5 and upwards, since Object.defineProperty doesn’t exist in ES3. To achieve similar checking for issues, you can create a separate project that targets ES5 and uses --noEmit to avoid a full build; one possible shape for such a project is sketched below.
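{
  // Sketch of a checking-only companion project (our own illustration); adjust paths to your setup.
  "extends": "./tsconfig.json",
  "compilerOptions": {
    "target": "ES5",                  // the flag requires ES5 or later targets
    "useDefineForClassFields": true,  // opt in to the new class field semantics checks
    "noEmit": true                    // type-check only, produce no output
  }
}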

For more information, you can take a look at the original pull request for these changes.

We strongly encourage users to try the useDefineForClassFields flag and report back on our issue tracker or in the comments below. This includes feedback on difficulty of adopting the flag so we can understand how we can make migration easier.

Build-Free Editing with Project References

TypeScript’s project references provide us with an easy way to break codebases up to give us faster compiles. Unfortunately, editing a project whose dependencies hadn’t been built (or whose output was out of date) meant that the editing experience wouldn’t work well.

In TypeScript 3.7, when opening a project with dependencies, TypeScript will automatically use the source .ts/.tsx files instead. This means projects using project references will now see an improved editing experience where semantic operations are up-to-date and “just work”. You can disable this behavior with the compiler option disableSourceOfProjectReferenceRedirect which may be appropriate when working in very large projects where this change may impact editing performance.

You can read up more about this change by reading up on its pull request.

Uncalled Function Checks

A common and dangerous error is to forget to invoke a function, especially if the function has zero arguments or is named in a way that implies it might be a property rather than a function.

interface User {
    isAdministrator(): boolean;
    notify(): void;
    doNotDisturb?(): boolean;
}

// later...

// Broken code, do not use!
function doAdminThing(user: User) {
    // oops!
    if (user.isAdministrator) {
        sudo();
        editTheConfiguration();
    }
    else {
        throw new AccessDeniedError("User is not an admin");
    }
}

Here, we forgot to call isAdministrator, and the code incorrectly allows non-administrator users to edit the configuration!

In TypeScript 3.7, this is identified as a likely error:

function doAdminThing(user: User) {
    if (user.isAdministrator) {
    //  ~~~~~~~~~~~~~~~~~~~~
    // error! This condition will always return true since the function is always defined.
    //        Did you mean to call it instead?
        sudo();
        editTheConfiguration();
    }
    else {
        throw new AccessDeniedError("User is not an admin");
    }
}

This check is a breaking change, but for that reason the checks are very conservative. This error is only issued in if conditions, and it is not issued on optional properties, if strictNullChecks is off, or if the function is later called within the body of the if:

interface User {
    isAdministrator(): boolean;
    notify(): void;
    doNotDisturb?(): boolean;
}

function issueNotification(user: User) {
    if (user.doNotDisturb) {
        // OK, property is optional
    }
    if (user.notify) {
        // OK, called the function
        user.notify();
    }
}

If you intended to test the function without calling it, you can correct the definition of it to include undefined/null, or use !! to write something like if (!!user.isAdministrator) to indicate that the coercion is intentional.

We owe a big thanks to GitHub user @jwbay who took the initiative to create a proof-of-concept and iterated to provide us with the current version.

Flatter Error Reporting

Sometimes, pretty simple code can lead to long pyramids of error messages in TypeScript. For example, this code

type SomeVeryBigType = { a: { b: { c: { d: { e: { f(): string } } } } } }
type AnotherVeryBigType = { a: { b: { c: { d: { e: { f(): number } } } } } }

declare let x: SomeVeryBigType;
declare let y: AnotherVeryBigType;

y = x;

resulted in the following error message in previous versions of TypeScript:

Type 'SomeVeryBigType' is not assignable to type 'AnotherVeryBigType'.
  Types of property 'a' are incompatible.
    Type '{ b: { c: { d: { e: { f(): string; }; }; }; }; }' is not assignable to type '{ b: { c: { d: { e: { f(): number; }; }; }; }; }'.
      Types of property 'b' are incompatible.
        Type '{ c: { d: { e: { f(): string; }; }; }; }' is not assignable to type '{ c: { d: { e: { f(): number; }; }; }; }'.
          Types of property 'c' are incompatible.
            Type '{ d: { e: { f(): string; }; }; }' is not assignable to type '{ d: { e: { f(): number; }; }; }'.
              Types of property 'd' are incompatible.
                Type '{ e: { f(): string; }; }' is not assignable to type '{ e: { f(): number; }; }'.
                  Types of property 'e' are incompatible.
                    Type '{ f(): string; }' is not assignable to type '{ f(): number; }'.
                      Types of property 'f' are incompatible.
                        Type '() => string' is not assignable to type '() => number'.
                          Type 'string' is not assignable to type 'number'.

The error message is correct, but ends up intimidating users through a wall of repetitive text. The ultimate thing we want to know is obscured by all the information about how we got to a specific type.

We iterated on ideas (see microsoft/TypeScript issue #33361), and now in TypeScript 3.7, errors like this are flattened to a message like the following:

Type 'SomeVeryBigType' is not assignable to type 'AnotherVeryBigType'.
  The types returned by 'a.b.c.d.e.f()' are incompatible between these types.
    Type 'string' is not assignable to type 'number'.

For more details, you can check out the original PR.

// @ts-nocheck in TypeScript Files

TypeScript 3.7 allows us to add // @ts-nocheck comments to the top of TypeScript files to disable semantic checks. Historically this comment was only respected in JavaScript source files in the presence of checkJs, but we’ve expanded support to TypeScript files to make migrations easier for all users.
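For instance (our own illustration), placing the comment at the very top of a .ts file silences semantic errors for that file:

// @ts-nocheck
// Semantic checks are skipped for this entire file, which can ease incremental migration.
const answer: number = "forty-two"; // would normally be an error, but is not reported here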

Semicolon Formatter Option

TypeScript’s built-in formatter now supports semicolon insertion and removal at locations where a trailing semicolon is optional due to JavaScript’s automatic semicolon insertion (ASI) rules. The setting is available now in Visual Studio Code Insiders, and will be available in Visual Studio 16.4 Preview 2 in the Tools Options menu.

New semicolon formatter option in VS Code

Choosing a value of “insert” or “remove” also affects the format of auto-imports, extracted types, and other generated code provided by TypeScript services. Leaving the setting on its default value of “ignore” makes generated code match the semicolon preference detected in the current file.
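If you prefer to configure this through Visual Studio Code’s settings.json, the relevant entries look roughly like the following (the setting names reflect our understanding of the VS Code options and are worth verifying against your editor version):

{
  "typescript.format.semicolons": "remove",
  "javascript.format.semicolons": "insert"
}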

Website and Playground Updates

We’ll be talking more about this in the near future, but if you haven’t seen it already, you should check out the significantly upgraded TypeScript playground which now includes awesome new features like quick fixes to fix errors, dark/high-contrast mode, and automatic type acquisition so you can import other packages! On top of all of that, each feature here is explained through interactive code snippets under the “what’s new” menu.

As a cherry on top, outside of the handbook we now have search powered by Algolia on the website, allowing you to search through the handbook, release notes, and more!

Search on the TypeScript website.

Feel free to keep an eye on development of the website over here.

Breaking Changes

DOM Changes

Types in lib.dom.d.ts have been updated. These changes are largely correctness changes related to nullability, but impact will ultimately depend on your codebase.

Class Field Mitigations

As mentioned above, TypeScript 3.7 emits get/set accessors in .d.ts files which can cause breaking changes for consumers on older versions of TypeScript like 3.5 and prior. TypeScript 3.6 users will not be impacted, since that version was future-proofed for this feature.

While not a breakage per se, opting in to the useDefineForClassFields flag can cause breakage when:

  • overriding an accessor in a derived class with a property declaration
  • re-declaring a property declaration with no initializer

To understand the full impact, read the section above on the useDefineForClassFields flag.

Function Truthy Checks

As mentioned above, TypeScript now errors when functions appear to be uncalled within if statement conditions. An error is issued when a function type is checked in if conditions unless any of the following apply:

  • the checked value comes from an optional property
  • strictNullChecks is disabled
  • the function is later called within the body of the if

Local and Imported Type Declarations Now Conflict

Due to a bug, the following construct was previously allowed in TypeScript:

// ./someOtherModule.ts
interface SomeType {
    y: string;
}

// ./myModule.ts
import { SomeType } from "./someOtherModule";
export interface SomeType {
    x: number;
}

function fn(arg: SomeType) {
    console.log(arg.x); // Error! 'x' doesn't exist on 'SomeType'
}

Here, SomeType appears to originate in both the import declaration and the local interface declaration. Perhaps surprisingly, inside the module, SomeType refers exclusively to the imported definition, and the local declaration SomeType is only usable when imported from another file. This is very confusing and our review of the very small number of cases of code like this in the wild showed that developers usually thought something different was happening.

In TypeScript 3.7, this is now correctly identified as a duplicate identifier error. The correct fix depends on the original intent of the author and should be addressed on a case-by-case basis. Usually, the naming conflict is unintentional and the best fix is to rename the imported type. If the intent was to augment the imported type, a proper module augmentation should be written instead.
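For illustration (our own sketch, with a placeholder alias name), the rename fix could look like this:

// ./myModule.ts
import { SomeType as ImportedSomeType } from "./someOtherModule";

export interface SomeType {
    x: number;
}

function fn(arg: SomeType) {
    console.log(arg.x); // OK: 'SomeType' now unambiguously refers to the local interface.
}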

API Changes

To enable the recursive type alias patterns described above, the typeArguments property has been removed from the TypeReference interface. Users should instead use the getTypeArguments function on TypeChecker instances.
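As a rough sketch of the new API (our own example against the public compiler API; the helper name is a placeholder):

import * as ts from "typescript";

// Log the type arguments of a generic instantiation (e.g. Array<string>) at a given node.
function logTypeArguments(checker: ts.TypeChecker, node: ts.Node) {
    const type = checker.getTypeAtLocation(node);
    // Only object types flagged as references carry type arguments.
    if (type.flags & ts.TypeFlags.Object &&
        ((type as ts.ObjectType).objectFlags & ts.ObjectFlags.Reference)) {
        for (const arg of checker.getTypeArguments(type as ts.TypeReference)) {
            console.log(checker.typeToString(arg));
        }
    }
}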

What’s Next?

As you enjoy TypeScript 3.7, you can take a glance at what’s coming in TypeScript 3.8! We recently posted the TypeScript 3.8 Iteration Plan, and we’ll be updating our rolling feature Roadmap as more details come together.

We want our users to truly feel joy when they write code, and we hope that TypeScript 3.7 does just that. So enjoy, and happy hacking!

– Daniel Rosenwasser and the TypeScript Team

The post Announcing TypeScript 3.7 appeared first on TypeScript.

Join the Visual Studio for Mac ASP.NET Core Challenge


In September, we shared Visual Studio 2019 for Mac v8.3 with you – this was our biggest release yet for .NET developers working on a Mac. Today, we’d like to invite you to take part in a new community challenge in which you can interact directly with our team to improve Visual Studio for Mac, explore some great looking ASP.NET Core samples, and earn prizes!

Taking the ASP.NET Core Challenge

Are you ready to jump in? There are just three simple steps to enter:

Download the Visual Studio for Mac 8.3 release or update to it from within the IDE using the Stable channel in the Updater.

Now, create or modify an ASP.NET Core site with a web UI using Visual Studio for Mac; you can use the samples from the control libraries mentioned below, or build your own. We only ask that you try our new HTML/CSHTML, CSS, or JavaScript/TypeScript editors.

Finally, send us a Tweet of the UI that you build, using the @VisualStudioMac handle on Twitter along with the hashtags #vs4macChallenge #sweepstakes. Include a screen shot of your running app and an example of the source code you used, running in Visual Studio for Mac! We’ll reply with a survey you can take to share your experience. The survey will also allow you to share your contact information with us if you’d like to be entered for a chance to get swag.

Swag!

To thank you for your time submitting and filling out our follow-up survey, we have some fun swag giveaways for you! Each submission is entered into a random drawing for 1 of:

  • 3 grand prizes: A product license from GrapeCity, Syncfusion, or Progress. Or,
  • 100 first prizes: A $15.00 voucher for the .NET Foundation Store, to purchase a shirt or stickers.

To be eligible, you’ll need to share your email address with us in the survey mentioned earlier.

For more details on eligibility, please read the Official Rules for this sweepstakes.

ASP.NET Core Samples to help you out

To give you some interesting new samples to explore in this challenge, here are 3 great ASP.NET Core control libraries for you to check out. Each of them has been a great help in pulling together some Mac-based instructions to help you get started:

If you have other favorite libraries you prefer to use, that’s great! Just let us know what you used as part of your contest submission.

Go forth and code!

This challenge starts today, November 5th, and goes until November 19th, and we’ll share out the end results by November 22nd. As you work with Visual Studio for Mac and build out your app, feel free to chat with us on Twitter via @VisualStudioMac. Now, go have fun and code!

The post Join the Visual Studio for Mac ASP.NET Core Challenge appeared first on Visual Studio Blog.

New Azure investments deliver unprecedented performance for all your business-critical applications


Technology is being infused into every dimension of our lives, from stadiums to operating theaters to refrigerators to cars, technology is at the center of everything we do. It’s no longer just the unicorns that are digital disruptors. Every business is looking to benefit from technology and increase customer connection, satisfaction, and profitability. Organizations like BP, Lufthansa, and Team Rubicon are optimizing and transforming their businesses with Azure Infrastructure, building new applications to connect customer-service, logistics, and service delivery in novel ways that increase employee productivity and better serve their customers.

This week from Microsoft Ignite, we're highlighting key Azure Infrastructure enhancements that further power our customers’ digital transformation journey.

Increased performance and lower cost for any workload

Azure has the broadest portfolio of compute offerings, ranging from small to the industry’s largest virtual machines (VMs) to purpose-built hardware that is able to support native VMware workloads, enterprise-grade files powered by NetApp, and up to 120 TB SAP scale-out deployments. CONA Services, the service arm for Coca-Cola bottlers, runs a 40 TB mission-critical system on Azure’s purpose-built SAP HANA infrastructure, one of the largest SAP HANA cloud deployments. To complement our compute portfolio, we offer one of the highest performance disks, including one of the fastest disks in the cloud today with Azure ultra disks, delivering up to 160,000 IOPS.

Customers are addressing new, high-performance scenarios that were previously cost-prohibitive or simply not possible. With our new Azure HB and HC Virtual Machines, Azure is democratizing high-performance computing with unprecedented performance, scalability, and cost-efficiency for large tightly-coupled workloads in the cloud. InfiniBand networking provides the lowest latency and highest bandwidth in the industry and helps power customer workloads up to 23,000 cores for a single MPI-based application, which is 10x higher than what is found anywhere else in the cloud. With HBv2, the first Azure Virtual Machine featuring 200 gigabit InfiniBand, Azure supports workloads up to 80,000 cores per job.

We are also seeing customers move more Windows Server and Linux workloads to Azure. More than 50 percent of Azure’s compute runs Linux workloads today. When it comes to Windows Server and SQL, 30 percent more enterprises choose Azure over the next major cloud vendor. We offer unparalleled innovation with Azure SQL Managed Instance, App Service and Windows Virtual Desktop along with unmatched security and seamless hybrid capabilities, making Azure the best cloud for Windows and SQL Server workloads. When it comes to performance, Azure SQL Database is the price-performance leader for business-critical workloads while costing up to 86 percent less compared to AWS RDS.

At Microsoft Ignite, we are expanding our compute, storage, and networking offerings to meet an even wider range of customer scenarios. Some highlights include:

  •  General availability of Ea v4 and Eas v4 Azure Virtual Machine-series for memory-intensive workloads and the Da v4 and Das v4 Azure Virtual Machine-series for general purpose applications. These new Azure Virtual Machines are the first in the cloud to feature the latest AMD EPYC™ 7452 processor.
  •  Preview of NVv4 and HBv2 VM-series to support virtual desktop and HPC workloads. These new Azure Virtual Machines feature the latest AMD EPYC™ 7742 processor. NVv4 is designed to be the most cost-effective way to do visualization workloads, supporting VMs with fractional GPUs - as little as 1/8th GPU. NVv4 is Azure’s first visualization-optimized VM to offer AMD RADEON INSTINCT™ GPUs, while HBv2 is Azure’s first HPC VM to offer 200 gigabit InfiniBand networking.
  •  Preview of NDv2 VM-series to support the most demanding machine learning models and distributed AI training workloads. These updated VMs feature eight NVIDIA Tesla V100 NVLINK interconnected GPUs with 32 GB of memory each.
  •  Preview of new, smaller 4, 8 and 16 GB sizes on Premium SSD, Standard SSD and ultra disks to provide a lower cost for customers migrating workloads with less predictable traffic patterns to the cloud.
  •  Preview of the new bursting capabilities on applicable Premium SSD with up to 30x performance for spiky workloads.
  •  Preview of ADLS multi-protocol access which provides core blob features with Azure Data Lake Storage (ADLS) Gen2 including logging, tiering, and event grid integration, enhancing enterprise integration.
  •  Preview of Azure Peering Service which targets customers with an internet-first network strategy for accessing Azure and SaaS services such as Office 365. Through partnering with internet service providers, customers can now take advantage of our global network to enable reliable and optimized internet connectivity to Microsoft services.
  •  General availability of satellite support for Azure ExpressRoute to extend services into hard-to-reach areas critical for many customers across industries.
  •  General availability of Azure Bastion, making Azure the first public cloud to bring this functionality integrated as-a-service into the platform, with fast and super simple deployment of a bastion host to your infrastructure in Azure.

Unmatched security and simplified scalability for any workload

With 54 regions worldwide, we offer more regions than any other cloud provider across six continents. We are continuously investing in Azure to ensure it meets the highest reliability and scalability standards so you can be confident when running your business-critical workloads. When it comes to cloud security, we invest over a billion dollars a year and employ over 3,500 employees focused on security. Just a few weeks ago, we announced the general availability of Azure Sentinel, a built-in cloud-native SIEM that protects your entire enterprise.

This week, we are highlighting some of the enhancements we are making on Azure scalability, reliability, and security:

  •  General availability of Generation 2 Azure Virtual Machines, improving security with support for Intel Software Guard Extensions (Intel SGX), and the ability to provide large VMs (up to 12TB) and OS disk sizes that exceed 2TB.
  •  Preview of new features for virtual machine scale sets, for Windows and Linux, that will help you more easily manage VMs while improving runtime and performance capabilities. For example, you can now provision custom VM images at scale using the shared image gallery, while accelerating provisioning times.
  •  Preview of object replication service to support geo-distributed applications with customer-controlled blob replication to different regions.
  •  Enhanced Azure Security Center capabilities including even richer vulnerability assessment for VMs powered by Qualys, support for Kubernetes containers, and integration of security recommendations from partners including Check Point, Tenable and CyberArk available soon.
  •  Azure Sentinel enhancements including connectors for Citrix and ZScaler, investigation tools for suspicious URLs, and enriched detections.
  •  Azure Managed Disks enhanced to provide customers with full control over their compliance needs by enabling server-side encryption with customer-managed keys. This will enable customers to leverage Azure Key Vault and track key usage. This new capability is available in preview for Premium solid-state drive (SSD), Standard SSD, and Standard hard disk drive (HDD) disk types.

Unified hybrid management across all your environments

We are seeing customer IT environments evolve as more workloads move to the cloud and with the rise of edge computing. IT environments are becoming increasingly complex with different types of applications, hardware, multi-cloud, and edge environments, essentially creating an IT resource sprawl. Customers tell us that they are looking for a unified approach to organize, govern, and secure their IT resources wherever they are from a central place, at scale.

At Microsoft Ignite, we are announcing hybrid capabilities to enable cloud innovation anywhere with consistent management across on-premises and multi-cloud environments. Some of these highlights include:

  •  Preview of Azure Arc, a set of technologies that extend Azure management and enable Azure data services across on-premises, multi-cloud, and edge. Customers now have a central, unified approach to manage and govern Windows and Linux servers, Kubernetes clusters, and Azure data services wherever they are. Azure Arc also extends the adoption of cloud practices like DevOps, Azure Governance, and Azure security across on-premises, multi-cloud, and edge.
  •  General availability of Windows Admin Center version 1910 that delivers powerful hybrid capabilities to manage Windows Servers wherever they run. It streamlines integration of on-premises servers to Azure for disaster recovery, backup, patching, and monitoring, and now includes integration with Azure Security Center. Windows Admin Center also enables customers to use Azure Arc to take advantage of unified hybrid management from Azure.
  •  We are also expanding the Azure Stack portfolio to include Azure Stack Edge. Azure Stack Edge is an Azure managed appliance that brings the compute, storage, and intelligence of Azure to any edge location. You can manage Azure Stack Edge right from the Azure Portal.

All of these new capabilities can be combined with Azure’s latest developments in application modernization, including our new serverless, container, and functions capabilities.

These are just some of the highlights we’re delivering at Microsoft Ignite this week. We look forward to seeing how our customers integrate these capabilities into their digital transformation journey.


Azure. Invent with purpose.

Azure infrastructure as a service (IaaS) for every workload


This week at Microsoft Ignite, we announced several important additions to our Azure infrastructure as a service (IaaS) portfolio.

Many companies, including GEICO, H&R Block, and CONA Services, rely on Azure to run a very diverse set of business-critical workloads, often requiring dynamic and scalable infrastructure that delivers unparalleled performance.

In order to meet the needs of this diverse and growing set of mission-critical workloads that call Azure home, our infrastructure services continue to evolve to optimize the experience of running these workloads.

Infrastructure for every workload.

Comprehensive infrastructure solutions: Flexibility and choice

We announced several new offerings that expand our portfolio of available virtual machine (VM) instance sizes for general purpose, memory-intensive, and remote visualization scenarios, including the ability to run VMware environments natively and enhancements to the platform that make it even easier to migrate your workloads to Azure.

Ea v4, Eas v4, Da v4, and Das v4 series Microsoft Azure Virtual Machines now available

After being the first global cloud provider to announce the preview of Azure Virtual Machines based on the AMD EPYC™ 7452 processor, we’ve been working together with our technology partners, including AMD, to continue bringing the latest innovation to enterprises. 

This week we’re announcing the availability of the Da v4 and Das v4 Azure Virtual Machine series for general purpose Linux and Windows applications, and the Ea v4 and Eas v4 Azure Virtual Machine series for memory-intensive Linux and Windows workloads.

These new Azure Virtual Machines feature the latest AMD EPYC™ 7452 processor and up to 96 vCPUs, 672 GiBs of RAM, and 2,400 GiBs of SSD-based temporary storage. The Das-series and the Eas-series Virtual Machines support Azure Premium SSDs and will include Ultra Disk support in the near future.

New NVv4 series Azure Virtual Machines preview available

We are also enhancing our compute portfolio for Windows Virtual Desktops and high-performance computing (HPC) workloads with the preview of NVv4. These new Azure Virtual Machines feature the latest AMD EPYC™ 7742 processor and will be the first visualization-optimized Azure Virtual Machine to offer AMD RADEON INSTINCT™ MI25 GPUs. NVv4 (currently in preview) offers enhanced GPU resourcing flexibility, giving customers more choice by offering partitioned GPUs built using industry-standard SR-IOV technology. Customers can select the right size of GPU Virtual Machines with as little as 2GB of dedicated GPU frame buffer for an entry-level desktop in the cloud, and up to the whole GPU with 16GB of frame buffer to provide powerful engineering workstations. This makes entry-level and low-intensity GPU workloads more cost-effective while still giving customers the option to scale up to full-GPU processing power delivered by AMD RADEON INSTINCT™ MI25 GPUs.

Azure VMware Solutions now available in West Europe

We’re also announcing the availability of Azure VMware Solutions in the West Europe Azure region. If you are currently managing an on-premises VMware environment, Azure VMware Solutions delivers the ability to run your VMware environment natively on Azure. This gives you the option to leverage your existing VMware skills and investments while taking full advantage of the scale and automation Azure offers. Azure VMware Solutions is now supported in East US, West US, and West Europe regions.

New Azure Migrate features to streamline migration

Azure Migrate is a central hub for all your migration needs and now delivers new capabilities to accelerate the migration of physical servers and virtual machines. We have also made enhancements to the Server Assessment capabilities that reduce friction through agentless discovery options. To ensure you have the information you need for migration, we now provide deeper application dependency analysis. Refer to the documentation for more details.

A dynamic and scalable infrastructure for uncompromised performance

One of the most valuable promises of cloud infrastructure is the ability to meet evolving business and IT requirements. In our mission to continuously improve customers’ access to dynamic and scalable infrastructure, we’ve made a couple of important additions to our portfolio.

Azure generation 2 virtual machines now generally available

Generation 2 virtual machines are now generally available on Azure. Generation 2 VMs provide support for Intel Software Guard Extensions (Intel SGX), UEFI boot architecture, and the ability to provision large VMs (up to 12 TB) and OS disk sizes that exceed 2 TB.

Generation 2 VMs are fully supported in the portal, CLI, and PowerShell interfaces, and customers can opt to use them during the provisioning and deployment process, depending on their needs. Please refer to the Windows and Linux documentation for more information.
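
As a hedged illustration, provisioning a generation 2 VM from the CLI only differs in the image you select; the image URN below (an Ubuntu Gen2 SKU) and the other values are assumptions for this example, and available Gen2 image SKUs vary by region.

    # Create a generation 2 VM by choosing a Gen2-enabled image SKU (example SKU; verify availability in your region)
    az vm create \
      --resource-group myResourceGroup \
      --name myGen2vm \
      --image Canonical:UbuntuServer:18_04-lts-gen2:latest \
      --size Standard_D4s_v3 \
      --admin-username azureuser \
      --generate-ssh-keys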

New Azure Virtual Machine Scale Sets features now in preview

We’re also introducing the preview of new features for Azure Virtual Machine Scale Sets that will greatly simplify the experience of running virtual machines at scale, as well as improve the runtime capabilities and performance of these workloads.

In addition to supporting a homogeneous set of VMs for a scalable app layer, you can now create an empty virtual machine scale set and add various VMs (even those belonging to different VM series) later during the VM creation process. This allows you to achieve high availability, for example, by deploying a set of virtual machines across different fault domains within a single availability zone. You can now use a Virtual Machine Scale Set to deploy a SQL high availability (HA) cluster within a zone, placing the SQL primary, secondary, and witness VMs in unique fault domains while maintaining the lower inter-VM network latency seen within an availability zone.

You can now also provision VMs with custom images using the Azure Shared Image Gallery, which provides a quick, easy and scalable way to share images across different VMs and also accelerates provisioning times.

You can also specify a scale-in policy that gives you control over the order in which VMs should be de-provisioned. Termination notifications now give customers up to 15 minutes to perform any clean-up or other pre-shutdown tasks before VMs are deprovisioned, and you can now use instance protection from scale-in to designate VMs that should not be deprovisioned during a scale-in action. 
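
A rough sketch of how these settings surface in the Azure CLI is shown below; the resource names are placeholders, the scale-in policy flag reflects the preview at the time of writing, and the termination-notification property path is an assumption based on the scheduled events profile, so treat it as illustrative rather than definitive.

    # Create a scale set that removes the oldest VMs first during scale-in (flag in preview)
    az vmss create \
      --resource-group myResourceGroup \
      --name myScaleSet \
      --image UbuntuLTS \
      --instance-count 3 \
      --scale-in-policy OldestVM

    # Enable termination notifications with a 10-minute delay before deprovisioning (property path assumed)
    az vmss update \
      --resource-group myResourceGroup \
      --name myScaleSet \
      --set virtualMachineProfile.scheduledEventsProfile.terminateNotificationProfile.enable=true \
            virtualMachineProfile.scheduledEventsProfile.terminateNotificationProfile.notBeforeTimeout=PT10M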

All these new features will help you get your applications up and running quickly while giving you additional control over how your applications can scale to meet your requirements. 

HBv2 Azure Virtual Machines for HPC workloads coming soon

HBv2 VMs are designed to deliver supercomputer-class performance, message passing interface (MPI) scalability, and cost efficiency for a variety of real-world HPC workloads. HBv2 Virtual Machines support up to 80,000 cores for single MPI jobs to deliver performance that rivals some of the world’s largest and most powerful bare metal supercomputers.

Updated NDv2 Azure Virtual Machines preview

The NDv2-series Virtual Machines, currently in preview, are the latest, fastest, and most powerful addition to the GPU family, specifically designed for the cutting-edge demands of distributed HPC, AI, and machine learning workloads. These VMs feature 8 NVIDIA Tesla V100 NVLINK-interconnected GPUs with 32 GB of memory each, 40 non-hyperthreaded Intel Xeon Platinum 8168 processor cores, and 672 GiB of system memory. The NDv2-series Virtual Machines also feature 100 Gb/sec EDR InfiniBand with support for standard Mellanox OFED drivers and all MPI types and versions. With a total of 256 GB of GPU memory and a 100 Gb/sec InfiniBand interconnect, NDv2-series Virtual Machines are ready for the most demanding machine learning models and distributed AI training workloads using CUDA, TensorFlow, PyTorch, Caffe, and other frameworks.

Proximity placement groups now generally available

A proximity placement group is a logical grouping capability for Azure Virtual Machines that you can use to decrease the network latency between a set of virtual machines. When you assign your virtual machines to a proximity placement group, their placement is optimized to deliver lower latency for your latency-sensitive workloads. We’ve seen robust customer adoption of this new feature during the preview over the last few months, and we’re pleased to now make Proximity Placement Groups generally available in most Azure regions. Please check the documentation for more information.
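
As a brief illustration, the CLI flow looks roughly like the following; the resource names are placeholders, and co-locating VMs this way only helps when they can actually be placed close together in the same region.

    # Create a proximity placement group, then place latency-sensitive VMs in it
    az ppg create \
      --resource-group myResourceGroup \
      --name myPPG \
      --location eastus \
      --type Standard

    az vm create \
      --resource-group myResourceGroup \
      --name myLowLatencyVM \
      --image UbuntuLTS \
      --ppg myPPG \
      --generate-ssh-keys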

Azure Spot Virtual Machines

Finally, Azure Spot Virtual Machines, which give you access to unused Azure compute capacity at deep discounts, will be available soon. Spot Virtual Machines will be ideal for workloads that can be interrupted, providing scalability while reducing costs. You will be able to take advantage of Spot Virtual Machine pricing for Azure Virtual Machines or Virtual Machine Scale Sets (VMSS) to deploy opportunistic workloads of all sizes. We expect to preview this by early 2020.

In conclusion, there has never been a better time to run your workloads on, or to migrate to, Azure. We hope you enjoy Microsoft Ignite!

Additional Resources


Azure. Invent with purpose.

    Enabling and securing ubiquitous compute from intelligent cloud to intelligent edge


    Enterprises are embracing the cloud to run their mission-critical workloads. The number of connected devices on- and off-premises, and the data they generate, continues to increase, requiring new enterprise network edge architectures. We call this the intelligent edge – compute closer to the data sources and users to reduce latency. The intelligent cloud, with its massive compute power, storage, and variety of services, works in concert with the intelligent edge using similar programming models to enable innovative scenarios and ubiquitous compute. Networking is the crucial enabler that integrates the intelligent cloud with the intelligent edge.

    The Azure Networking mission is to provide the most secure, reliable, and performant network for your workloads, delivered and managed from the intelligent cloud to the intelligent edge. We continue to innovate to help your services connect and extend to the cloud and the edge, stay protected, perform optimally, and benefit from insightful monitoring.

    Microsoft global network

    Microsoft runs one of the world’s largest wide area networks (WAN), serving all Microsoft cloud services including Azure, Dynamics 365, Microsoft 365, LinkedIn, Xbox, and Bing. The WAN connects all Microsoft datacenters running our cloud services together and to our customers and partners through edge sites. These edge sites are strategically located around the world. This is where we exchange traffic with internet service providers for internet traffic and with ExpressRoute partners for private connectivity traffic. We also use the Azure Front Door and Azure Content Delivery Network services at our edge sites to enhance and accelerate the experience of our own services, such as Microsoft 365. To provide global coverage, the WAN has over 130,000 miles of subsea, terrestrial, and metro optical fiber and is fully managed by Microsoft using internal software-defined networking (SDN) technologies to provide the best networking experience. Industry leaders such as ThousandEyes have reported on the performance of our global network, and a 2018 study found it to be the most robust and most consistent. One fundamental principle in providing a great experience is to get traffic onto the Microsoft network as close to the customer as possible and keep it on Microsoft’s network as long as possible. All traffic between Microsoft services and datacenters remains fully within Microsoft’s network and does not traverse the internet.

    Figure 1. Core pillars of Azure Networking

    Connect and extend

    To get the best internet experience, data should enter and exit the Microsoft network as close as possible to you or your users. With over 160 edge sites today, we have an aggressive plan to increase the number of sites, which you can read more about in our edge site expansion blog. We are also increasing the number of ExpressRoute meet-me sites, providing greater flexibility to privately connect to your Azure workloads.

    Staying connected to access and ingest data in today’s highly distributed application environments is paramount for any enterprise. Many businesses need to operate in and across highly unpredictable and challenging conditions. For example, energy, farming, mining, and shipping often operate in remote, rural, or other isolated locations with poor network connectivity. ExpressRoute for Satellites is now generally available, enabling access to Microsoft cloud services using satellite connectivity. With commercial satellite constellations becoming widely available, new solution architectures offer improved and affordable performance to access Microsoft cloud services.

    MACsec, an industry encryption standard for point-to-point connections, is now supported on ExpressRoute Direct in preview. ExpressRoute Direct customers can ensure data confidentiality and integrity between physical connections to the ExpressRoute routers to meet security and compliance requirements. Customers fully own and manage the lifecycle of the MACsec keys using Azure Key Vault.

    We have invested in optical technologies to greatly reduce the cost of metro networks. We are passing these savings to you with a new ExpressRoute circuit type called ExpressRoute Local, available via ExpressRoute partners. If you select an ExpressRoute site near our datacenters and only access data from that datacenter then egress prices are included in the ExpressRoute Local circuit price. For connectivity to regions in the same geo you can use ExpressRoute Standard, and to get anywhere in the world you can use ExpressRoute Premium.

    The new peering service for the Microsoft cloud, now in preview, enables enterprise-grade internet connectivity to access Azure, Dynamics 365, and Microsoft 365, via partnerships with internet providers and internet exchange providers. Peering service also provides internet latency telemetry, route monitoring, and alerting against hijacks, leaks, and other border gateway protocol misconfigurations.

    Figure 2. Launch partners supporting the new Peering Service

    We have enhanced our VPN service to support up to 10 Gbps of aggregate encrypted bandwidth, IKEv1 on all our VPN gateway SKUs, and packet capture to help debug configuration issues. We have also enhanced our point-to-site VPN service to support Azure Active Directory and multifactor authentication. We are also making available an OpenVPN client that you can download and run to access your VNet from anywhere.

    Azure Virtual WAN brings together our Azure connectivity services into a single operational interface with major SD-WAN partners. Azure Virtual WAN enables a global transit network architecture by providing ubiquitous connectivity between globally distributed sets of spokes such as VNets, sites, applications, and users. Significant enhancements include the preview of hub-to-hub and any-to-any connectivity. Virtual WAN users can connect multiple hubs for full mesh connectivity to further simplify their network architecture. Additionally, ExpressRoute and point-to-site are now generally available with Virtual WAN.

    Figure 3. Azure Virtual WAN full topology overview across customer sites and clients connecting to Azure

    We have been working closely with industry leaders to expand the ecosystem support for Virtual WAN. Today, we are announcing that Cisco and Microsoft are partnering to modernize the network for the cloud. Cisco, one of our largest global and strategic partners, is working with Microsoft to integrate Cisco SD-WAN technology with both Azure Virtual WAN and Office 365 to enable seamless, distributed and optimal branch office connectivity to Azure and Office 365.

    “At Cisco, we’re helping customers deliver security and application experience as they expand into the cloud. Collaborating with Microsoft to expand the value of Azure Virtual WAN with Cisco SD-WAN, we are creating new opportunities for our mutual customers to accelerate their hybrid cloud strategy.”

    Sachin Gupta, SVP, Product Management for Cisco Enterprise Networking Business

    Additionally, other partners including Cloudgenix, Fortinet, Nokia-Nuage, and Silver Peak, have finalized their integrations with Virtual WAN and are immediately available.

    IPv6

    Dual stack (IPv4 + IPv6) VNet will be generally available later this month. As a first in the cloud, Azure will enable customers to bring their own IPv6 private space into the VNet thereby avoiding any need for routing changes. IPv6 enables customers to address IPv4 depletion, meet regulatory requirements, and expand into the growing mobile and IoT markets with their Azure-based applications.

    Figure 4. Architectural diagram of an Azure VNet routing with IPv6 between VMs, subnet and Load Balancer
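
    A minimal sketch of a dual-stack VNet with the Azure CLI might look like the following; the address ranges and names are placeholders, and because the feature is only now reaching general availability the exact flags may differ slightly between CLI versions.

        # Create a VNet and subnet with both IPv4 and IPv6 address space (placeholder ranges)
        az network vnet create \
          --resource-group myResourceGroup \
          --name myDualStackVnet \
          --address-prefixes 10.0.0.0/16 fd00:db8:deca::/48 \
          --subnet-name mySubnet \
          --subnet-prefixes 10.0.0.0/24 fd00:db8:deca::/64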

    Protect

    Achieving Zero Trust networking

    Cloud applications and the mobile workforce have redefined the security perimeter. The new perimeter isn’t defined by the physical location(s) of the organization, it now extends to every access point that hosts, stores, or accesses corporate resources and services.

    Instead of believing everything behind the corporate firewall is safe, the Zero Trust model assumes breach and verifies each request as though it originates from an uncontrolled network. Regardless of where the request originates or what resource it accesses, Zero Trust teaches us to “never trust, always verify.”

    Azure Networking services provide critical controls to enhance visibility and help prevent bad actors from moving laterally across the network. Networks should be segmented, including deeper software-defined micro-segmentation, and real-time threat protection, end-to-end encryption, monitoring, and analytics should be employed.

    Azure Private Link – extended to all Azure regions

    Azure Private Link brings Azure services into your private virtual network. Supported Azure services such as Storage, SQL Database, and Azure Synapse Analytics can be consumed over a private IP address without opening access control lists (ACLs) to the public internet. Traffic going through Private Link always stays on the Microsoft backbone network and never enters the public internet. The platform as a service (PaaS) resources can also be accessed privately from on-premises through VPN or ExpressRoute private peering, keeping the ACLs simple. Starting today, Private Link will be available in all Azure public regions.

    Figure 5. Architectural diagram of Private Link deployed cross-premises

    Using Azure Private Link, Azure is the first cloud to provide data governance and compliance by implementing built-in data exfiltration protection. This brings us one step closer to our goal of zero trust networking, wherein malicious actors within the trusted network can’t exfiltrate data to non-secure accounts, since individual PaaS instances instead of service frontends are mapped as private endpoints. Private Link also empowers software as a service (SaaS) providers in Azure to extend the same capability to their customers. Snowflake is an early adopter of the program, with more partner services to follow.
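
    To make this concrete, a hedged CLI sketch of creating a private endpoint for a storage account follows; the resource names are placeholders, the subnet must have private endpoint network policies disabled, and the group-id flag name may vary across CLI versions.

        # Map a storage account's blob service to a private IP in your VNet (names are placeholders)
        storage_id=$(az storage account show --name mystorageacct --resource-group myResourceGroup --query id -o tsv)

        az network private-endpoint create \
          --resource-group myResourceGroup \
          --name myStoragePrivateEndpoint \
          --vnet-name myVnet \
          --subnet mySubnet \
          --private-connection-resource-id $storage_id \
          --group-ids blob \
          --connection-name myStorageConnection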

    Azure Firewall Manager is a new security management service that provides central security policy and route management for cloud-based security perimeters. Azure is currently the only cloud provider to offer traffic governance, routing control, and third party integrated security through Azure Firewall and Firewall Manager. Global admins can centrally create hub and spoke architecture and associate security or routing policies with such a hub, referred to as a secured virtual hub.

    Figure 6. Diagram of Azure Firewall Manager deployed inside Secured Virtual WAN Hubs

    With trusted security partners, you can use your familiar, industry-leading, third-party security as a service (SECaaS) offerings to protect internet access for your users. We are very pleased to announce our partnership with ZScaler, iboss, and Checkpoint (coming soon) as the trusted security partners.

    Azure Firewall threat intelligence-based filtering now generally available

    Using threat intelligence-based filtering, Azure Firewall can now be configured to alert and deny traffic to and from known malicious IP addresses and domains in near real-time. The IP addresses and domains are sourced from the Microsoft threat intelligence feed.

    We also extended our web application firewall (WAF) with three new features, WAF bot protection, WAF per-site policies, and geo filtering. Azure managed bot protection rule set in Azure Front Door detects different categories of bots and allows customers to set actions accordingly. Customers can block malicious bots at the network edge, allowing good bots to reach application backends, and log or redirect unknown bots to an alternative site. Azure managed bot protection rule set is also offered as a preview on Azure Application Gateway v2 SKU. WAF per site policy with Application Gateway enables customers to specify WAF policies for different web applications hosted on a single Application Gateway. This allows for finer grained security policy and eliminates the need to create additional deployments per site. Azure Application Gateway is introducing geo filters with existing custom rules in preview on v2 SKU. This capability allows you to extend existing IP/IP range based custom rules to also include countries as a matching criterion and take actions accordingly. This allows you to restrict traffic from a given country or only allow traffic from a set of countries.

    We recently announced the general availability of Azure Bastion. The Azure Bastion service is provisioned directly in your Virtual Network, enabling seamless remote desktop (RDP) and secure shell (SSH) access to all virtual machines in the VNet without needing a public IP address. Seamless integration and easy one-time setup of ACLs across your subnets eliminates subsequent and continuous management.

    Figure 7. Azure Bastion architecture showing SSL access to VNet resources through the Azure portal
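
    As a rough sketch of the one-time setup, Bastion needs a dedicated subnet named AzureBastionSubnet and a Standard public IP; the names, region, and address prefix below are placeholder values and may need adjusting to your environment.

        # Dedicated subnet for Bastion (must be named AzureBastionSubnet)
        az network vnet subnet create \
          --resource-group myResourceGroup \
          --vnet-name myVnet \
          --name AzureBastionSubnet \
          --address-prefixes 10.0.1.0/27

        az network public-ip create \
          --resource-group myResourceGroup \
          --name myBastionIP \
          --sku Standard

        az network bastion create \
          --resource-group myResourceGroup \
          --name myBastion \
          --vnet-name myVnet \
          --public-ip-address myBastionIP \
          --location eastus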

    Deliver

    Today we are also announcing a new feature, the Content Delivery Network Rules Engine, which lets customers customize how the Azure Content Delivery Network handles HTTP requests. Rules Engine supports powerful match conditions such as device detection, HTTP protocol, and header values, and triggers the appropriate actions. All of the HTTP rules run at our edge sites near end users, which gives significant performance benefits compared to running rules at customer origins.

    The Application Gateway Ingress Controller allows Azure Application Gateway to be used as the ingress for an Azure Kubernetes Service (AKS) cluster. The ingress controller runs as a pod within the AKS cluster. It consumes Kubernetes Ingress Resources and converts them to an Azure Application Gateway configuration, which allows the gateway to load-balance traffic to Kubernetes pods. Using the Application Gateway Ingress Controller enables customers to expose a single internet-accessible endpoint to communicate with their AKS clusters. Application Gateway directly interacts with pods using private addresses, which eliminates the additional DNAT incurred by kube-proxy and thus provides more efficient and performant traffic routing to pods. The Application Gateway Ingress Controller supports all features of Application Gateway, including WAF capabilities, to secure access to the AKS cluster.

    Figure 8. App Gateway Ingress controller explained relative to AKS

    Azure Key Vault is a platform-managed service to safeguard cryptographic keys and other secrets used by cloud apps and services. Azure Application Gateway v2 now supports direct integration of Key Vault-stored TLS certificates for its HTTPS-enabled listeners. This enables better TLS certificate security by clearly separating the certificate management process from Application Gateway and backend web application management. Application Gateway polls the Key Vault every few hours for a newer version of the transport layer security (TLS) certificate, enabling automatic renewal of certificates.
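
    A hedged sketch of wiring a Key Vault certificate to an Application Gateway v2 listener is shown below; the gateway must already have a managed identity with access to the vault, the names and secret URI are placeholders, and the exact parameter names may vary by CLI version.

        # Reference a Key Vault-stored certificate from an Application Gateway v2 SSL certificate object (illustrative only)
        az network application-gateway ssl-cert create \
          --resource-group myResourceGroup \
          --gateway-name myAppGatewayV2 \
          --name myKvCert \
          --key-vault-secret-id https://myvault.vault.azure.net/secrets/mytlscert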

    Monitor

    Azure Internet Analyzer is a new client-side measurement service now available in preview. Internet Analyzer enables A/B testing of networking infrastructures and their impact on your customers’ performance experience. Whether you’re migrating apps and content from on-premises to Azure or evaluating a new Azure service, Internet Analyzer allows you to learn from your users’ data and Microsoft’s rich analytics to better understand and optimize your network architecture with Azure before you migrate. Internet Analyzer is designed to address performance-related questions for cloud migration, deploying to new or additional Azure regions, or testing new application and content delivery platforms in Azure, such as Azure Front Door and Content Delivery Network.

    Azure Monitor for Network service is now available in preview. Azure Monitor for Network enables customers to monitor key metrics and the health of their network resources, discover issues, and get troubleshooting help. Azure Monitor for Network is on by default and doesn’t require any custom setup. Whether you are monitoring and troubleshooting cloud or hybrid networks, Azure Monitor for Network helps you set up alerts, get resource-specific diagnostics, and visualize the structure and functional dependencies between resources.

    Figure 9. Screenshot of Azure Monitor for Network illustrating App Gateway metrics and diagnostics

    Multi-access Edge Computing (MEC) in preview

    Multi-access Edge Computing offers application developers cloud-computing capabilities at the customer premises. This environment is characterized by very low latency and high bandwidth as well as real-time access to radio networks such as Private LTE and 5G. By integrating MEC capabilities with Azure, we will be offering a continuum of compute and network capabilities from the intelligent cloud to the edge. New critical and immersive scenarios such as smart factory and mixed reality require reliable low-latency and high bandwidth connectivity combined with local compute.

    Figure 10. Concept draft of Multi-access and network edge compute with Azure

    To address these needs, we are introducing a technology preview of Multi-access Edge Compute based on Azure Stack Edge deployed at the customer’s premises for the best possible latency. Key characteristics of the MEC are:

    • Enables developers to use GitHub and Azure DevOps CI/CD toolsets to write and run container-based applications at the customer’s premises. With a consistent programming model, it is straightforward to develop applications in Azure and then move them to Azure Stack Edge.
    • Wireless technology integration, including Private Long-Term Evolution (LTE), LTE-based Citizens Broadband Radio Service (CBRS), and forthcoming 5G technologies. As part of our MEC platform, we have partnered with technology innovators to provide mobile virtual network functions (Evolved Packet Core), device integration, SIM management, and radio access networks.
    • MEC is managed from Azure. Curated virtual network function (VNF) images are downloaded from Azure to simplify deploying and running a private mobile network. The platform also provides support for lifecycle management of the VNFs, such as patching, configuration, and monitoring.
    • A partner ecosystem including managed service providers to deploy end to end solutions in your network.

    For those interested in the early technical preview and options with MEC integration, please reach out to MEC-Networking@microsoft.com.

    Figure 11. Overview of Azure Multi-edge Compute (MEC) partner ecosystem

    Looking Forward

    We are fully committed to helping you connect to Azure, protecting your workloads, delivering a great networking experience, and providing extensive monitoring to simplify your deployments and operational costs while helping you better support your customers. At Microsoft Ignite we will share more details about our announcements, and you can learn more by viewing our technical sessions. We’ll continue providing innovative networking services and guidance to help you take full advantage of the cloud. We’re excited to learn about the new scenarios you enable with our networking services. As always, we welcome your feedback.


    Azure. Invent with purpose.


    Introducing C++ Build Insights


    C++ builds should always be faster. In Visual Studio 2019 16.2, we’ve shown our commitment to this ideal by speeding up the linker significantly. Today, we are thrilled to announce a new collection of tools that will give you the power to make improvements of your own. If you’ve ever had time for breakfast while building C++, then you may have asked yourself: what is the compiler doing? C++ Build Insights is our latest take on answering this daunting question and many others. By combining new tools with tried-and-tested Event Tracing for Windows (ETW), we’re making timing information for our C++ toolchain more accessible than ever before. Use it to make decisions tailored to your build scenarios and improve your build times.

    Getting started

    Start by obtaining a copy of Visual Studio 2019 16.4 Preview 3. Then follow the instructions on the Getting started with C++ Build Insights documentation page.

    Capturing timing information for your build boils down to these four steps:

    • Launch an x64 Native Tools Command Prompt for VS 2019 Preview as an administrator.
    • Run: vcperf /start MySessionName
    • Build your project.
    • Run: vcperf /stop MySessionName myTrace.etl

    Choose a name for your session and for your trace file. After executing the stop command, all information pertaining to your build will be stored in the myTrace.etl file.
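
    Put together, a typical session from an elevated developer command prompt looks like the sketch below; the session name and solution are placeholders, and any build invocation (MSBuild, CMake, or a plain cl.exe call) can go between the start and stop commands.

        vcperf /start MySessionName
        msbuild /m /p:Configuration=Release MySolution.sln
        vcperf /stop MySessionName myTrace.etl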

    ETW allows C++ Build Insights to collect information from all compilers and linkers running on your system. You don’t need to build your project from the same command prompt you are running vcperf from; you can use a different command prompt, or even Visual Studio.

    Another advantage of ETW is that you don’t need to add any compiler or linker switches to collect information because the operating system activates logging on your behalf. For this reason, C++ Build Insights will work with any build system, no configuration necessary!

    Once you’ve collected your trace, open it in Windows Performance Analyzer (WPA). This application is part of the Windows Performance Toolkit. Details on which version of WPA you need to view C++ Build Insights traces can be found in the Getting started documentation page.

    In the following sections, we show you a glimpse of what you will find after opening your trace in WPA. We hope to convince you to try out these new tools and explore your build!

    At the crux of the matter with the Build Explorer

    Central to understanding a build is having an overview of what happened through time. The Build Explorer is a core element of C++ Build Insights that we designed for this purpose. Drag it from the Graph Explorer panel on the left onto the main analysis window in WPA. Use it to diagnose parallelism issues, detect bottlenecks, and determine if your build is dominated by parsing, code generation, or linking.

     

    An animation showing the Build Explorer in action.

     

    A thousand scoffs for the thousand cuts: aggregated statistics

    The most insidious of build time problems are the ones that strike you little by little until the damage is so large that you wonder what hit you. A common example is the repetitive parsing of the same header file due to its inclusion in many translation units. Each inclusion may not have a large impact by itself but may have devastating effects on build times when combined with others. C++ Build Insights provides aggregated statistics to assist you in warding off such threats. An example of this capability is illustrated below, where file parsing statistics are aggregated over an entire build to determine the most time-consuming headers. To view this information for your build, choose the Files view from the Graph Explorer panel on the left in WPA. Drag it to the analysis window.

     

    Analysis window showing the number of times a header was included and the cumulative time it took.

     

    What information can you get?

    With C++ Build Insights, expect to be able to obtain the following information for all your compiler and linker invocations.

    • Front-end: overall front-end time, individual file parsing times.
    • Back-end: overall back-end time, function optimization time, whole program analysis time, code generation thread time, whole program analysis thread time.
    • General compiler: overall compiler time, command line, inputs/outputs, working directory, tool path.
    • General linker: overall linker time, pass 1 time, pass 2 time, link-time code generation time, command line, inputs/outputs, working directory, tool path.

    Tell us what you think!

    Download your copy of Visual Studio 2019 16.4 Preview 3, and get started with C++ Build Insights today!

    In this article, we shared two features with you, but there is much more to come! Stay tuned for more blog posts detailing specific ways in which you can use C++ Build Insights to improve your builds.

    We look forward to hearing from you on how you used vcperf and WPA to understand and optimize your builds. Are there any other pieces of information you would like to be able to get from C++ Build Insights? Tell us what you think in the comments below. You can also get in touch with us at visualcpp@microsoft.com or on Twitter.

    The post Introducing C++ Build Insights appeared first on C++ Team Blog.

    Now available: Azure DevOps Server 2019 Update 1.1 RC


    Today, we are releasing Azure DevOps Server 2019 Update 1.1 RC. This is a go-live release, meaning it is supported on production instances, and you will be able to upgrade to our final release.

    Azure DevOps Server 2019 Update 1.1 includes bug fixes for Azure DevOps Server 2019 Update 1. You can find the details of the fixes in our release notes. You can upgrade to Azure DevOps Server 2019 Update 1.1 from previous versions of Azure DevOps Server 2019 or Team Foundation Server 2012 or later. You can also install Azure DevOps Server 2019 Update 1.1 without first installing Azure DevOps Server 2019.

    Here are some key links:

    We’d love for you to install this release candidate and provide any feedback via Twitter to @AzureDevOps or in our Developer Community.

    The post Now available: Azure DevOps Server 2019 Update 1.1 RC appeared first on Azure DevOps Blog.


    Success in the cloud: Microsoft Cloud Adoption Framework for Azure


    With thousands of customers deploying more and more applications on cloud platforms, cloud technologies have become increasingly familiar to businesses. However, the path to successful cloud adoption can be bumpy for enterprises, as it requires more than the typical technology deployment steps. Successful cloud adoption requires deeper and broader changes across an organization, including alignment of business plans and expectations, process updates, and technical readiness.

    In our work with customers, we’ve helped solve some common obstacles to the cloud journey, including proper cloud governance to control costs and ensure security, confusion on the right migration strategy to define a path to the cloud, and a lack of context on how to establish a Cloud Center of Excellence in their organization.

    Today, we are announcing the general availability of new content within the Microsoft Cloud Adoption Framework for Azure, including Innovate and Manage stages and new resources and assessments to help organizations wherever they are. It brings together best practices from Microsoft solution architects, partners, and customers into a comprehensive and curated set of tools, documentation, templates, and guidance that help organizations shape their cloud strategies, driving towards their desired business goals and outcomes.

    Digital transformation is real and is here. We realize change takes time and real effort; it impacts people, culture, and business, and it can feel risky. It requires new disruptive thinking. It requires leaders to adapt, take risks, and learn quickly. It requires a culture and organization shift. And the Cloud Adoption Framework is here to help organizations navigate their respective and unique journeys, delivering on their business goals through the power of Azure.

    How does it work?

    Built with a modular approach, the Cloud Adoption Framework helps organizations break down their journey into discrete stages with clear guidance for business decision makers, cloud architects, and IT professionals to undertake their cloud journey with confidence and control, aligning business priorities and expected outcomes with technology changes and investments.

    The six stages to the Microsoft Cloud Adoption Framework for Azure.

    While each organization will have its own journey to adopt the cloud, there are six main stages that hold true for most organizations: strategy, plan, ready, adopt, govern, and manage. Although the framework suggests a linear journey, in reality it isn’t. It is an iterative and cyclical process, where organizations jump in and out of stages as they make progress or have new areas to address in their journey. If the organization is concerned with managing policies and staying compliant with industry regulations, then it should focus on establishing proper cloud governance to unblock and address those concerns. If the organization wants to review or define its own motivations for cloud adoption, then it will need to focus on the strategy and planning stages to establish a clear North Star for this change, and so forth.

    Each stage of the framework focuses on specific aspects of the cloud journey, for each organization to address internally. Here is an overview of each stage:

    • Strategy: Understand the motivation to adopt new cloud technologies, considering business and financial justifications, and aligning to business goals and expected outcomes.
    • Plan: Create a cloud adoption plan based on an inventory of the current digital estate, prioritized workloads, and a suitable migration strategy for business impact. A cloud strategy team and center of excellence must be defined at this point to ensure appropriate execution.
    • Ready: Prepare people, business processes, and IT environments for the change, based on a prioritized and agreed cloud adoption plan, leveraging landing zones and replicable mechanisms to enable agility with proper governance and controls.
    • Adopt: Whether looking to migrate existing workloads to the cloud or innovate creating something new, this stage is where the technology implementation takes place to deliver on the business expectations and align to the cloud adoption plan.
    • Govern: Review existing on-premises IT policies and define cloud governance to complement them. Learn to iterate as the cloud estate, business priorities, and processes change over time, potentially creating new risks to mitigate.
    • Manage: Define a cloud operating model based on operational excellence. Monitor, manage, and optimize cloud environments to adapt and deliver on business goals and expected outcomes.

    Making the Cloud Adoption Framework actionable

    Many customers and partners have been leveraging and contributing to this framework for a few months now. Partners, in particular, have found it very useful in helping address their customers’ main blockers to cloud adoption, focusing on both the technical and business components.

    “As a partner, New Signature has used the Microsoft Cloud Adoption Framework to help organize our services and have aligned customer engagements with the themes and goals the framework discusses. It has also been useful to fully identify the end to end capability needed to run both the technical transformation and the business change elements of cloud adoption.” - Sean Morris, Head of Consulting at New Signature.

    And many Microsoft partners have already created offerings to help guide customers through their journey based on the framework. Similarly, “OpsCompass leverages the Microsoft Cloud Adoption Framework for Azure to help customers feel safe knowing they’re proactively managing their cost, compliance, and security risks as they adopt the cloud,” said Scott Griffith, Vice President of Corporate Development at OpsCompass.

    Already, over 200 organizations have engaged with the framework, providing feedback, sharing best practices, and also learning new aspects to address open items in their journey. One of those is Dentsu Aegis Network, which wanted to enable teams across the world to leverage the power of Azure in a controlled and secured manner.

    “Using the Cloud Adoption Framework, we set up an automated self-service portal where anyone can request a cloud landing zone, get approval, and within hours have a new environment provisioned and ready to use in Azure,” said Chris Fry, Director of Global Programs at Dentsu Aegis Network.

    All organizations can start leveraging the Cloud Adoption Framework to support their adoption journey today. Depending on your organization’s needs, there are a few options to get started:

    For more information and to learn more about it, visit the Cloud Adoption Framework for Azure page and for the best practices, guidance, and technical documentation, visit the Microsoft Cloud Adoption Framework for Azure documentation. Learn more about Microsoft migration resources and programs.

     


    Azure. Invent with purpose.

    Serverless for the enterprise with Microsoft Azure


    Cloud computing has opened new paradigms for enterprises to reach higher levels of productivity and scale. At the tip of that spear is serverless computing, enabling developers, teams, and organizations to focus on business logic and leave hosting and scaling of resources to the cloud platform.

    At Microsoft Ignite, we’re announcing serverless functions with no cold start and network isolation, PowerShell support for event-driven automation, simplified secrets management across serverless apps, unified monitoring capabilities, and increased language support—including .NET Core 3 and Python 3.7! These capabilities expand the list of target scenarios that would benefit from event-driven architectures and bring serverless to operations teams.

    Business-critical apps with no cold start and network isolation

    Function as a service (FaaS) platforms present a small delay on their first execution, known as a cold start. This makes it challenging to adopt serverless functions for mission-critical apps where a few seconds can make a huge difference. To address this, we’re announcing the general availability of the Azure Functions Premium plan.

    It brings together the best of both serverless and dedicated hosting; you can leverage fast, dynamic scale while benefiting from network isolation, consistent performance, and more predictable costs.

    Scale settings for the Azure Functions Premium plan in the Azure portal

    When coupled with our PowerShell support, functions running on the Premium plan are the ultimate tool in the IT administrator’s belt, enabling long-running orchestrations with support for executions up to an hour long and hybrid connections to connect directly to on-premises resources.
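
    A minimal sketch of creating a Premium plan function app with the Azure CLI follows; the names, region, and storage account are placeholders, and EP1 is the smallest of the Elastic Premium SKUs.

        # Create an Elastic Premium (EP1) plan and a function app on it (placeholder names)
        az functionapp plan create \
          --resource-group myResourceGroup \
          --name myPremiumPlan \
          --location eastus \
          --sku EP1

        az functionapp create \
          --resource-group myResourceGroup \
          --name myPremiumFunctionApp \
          --plan myPremiumPlan \
          --storage-account mystorageacct \
          --runtime dotnet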

    Serverless automation with PowerShell

    Automation and resource management are crucial for any cloud or hybrid solution, helping companies implement and comply with internal policies, reduce costs by turning off cloud resources during idle hours, or meet service-level agreement times. By taking an event-driven approach to building automation workflows, you can benefit from hundreds of built-in Azure connectors to automatically respond to activity happening not only in Azure services, but also in third-party solutions and on-premises resources.

    With the general availability of PowerShell support in Azure Functions, you can set up serverless automation processes for infrastructure management and scripting tasks. Managing PowerShell modules is now easier than ever as you can rely on Azure Functions to ensure the latest critical and security updates are automatically installed.
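
    For instance, a PowerShell function app for automation tasks can be created on the consumption plan with a single CLI call; the names and region below are placeholders, and PowerShell function apps run on Windows.

        az functionapp create \
          --resource-group myResourceGroup \
          --name myAutomationFunctions \
          --consumption-plan-location eastus \
          --storage-account mystorageacct \
          --runtime powershell \
          --os-type Windows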

    For more complex tasks you can use Durable Functions, an extension to the Azure Functions runtime that uniquely brings stateful and orchestration capabilities to serverless functions. The new version of Durable Functions not only lets you simplify the orchestration of tasks, but now enables building stateful durable entities. This is especially helpful for scenarios that may require state persistence for a large number of devices (thousands, for example), all within a single serverless function.

    Simplified secrets management

    Security is top-of-mind for every company, and more organizations are adopting secrets management policies to securely store and consume very sensitive information including certificates, connection strings, or passwords. Azure Key Vault provides these capabilities in Azure and supports storing secrets centrally with expectations around expiration and access control.

    Serverless apps and web sites hosted in Azure App Service and Azure Functions can now easily incorporate secrets management without any code changes by including references to Azure Key Vault secrets in their application settings, now in general availability. For existing applications, you can simply replace secrets included in the application settings with their references in Azure Key Vault, and they will continue to operate as normal. Behind the scenes, the application’s system-assigned identity is used to securely fetch the secret and make it available as an environment variable.
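
    As a hedged example, the reference syntax can be applied from the CLI as shown below; the app’s system-assigned identity must be granted get access to secrets, the names are placeholders, and depending on the service version the secret URI may need to include a specific secret version.

        # Store the secret centrally, then reference it from application settings instead of embedding it
        az keyvault secret set \
          --vault-name myKeyVault \
          --name SqlConnectionString \
          --value "<connection-string>"

        az functionapp config appsettings set \
          --resource-group myResourceGroup \
          --name myPremiumFunctionApp \
          --settings "SqlConnectionString=@Microsoft.KeyVault(SecretUri=https://mykeyvault.vault.azure.net/secrets/SqlConnectionString/)"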

    For simplified lifecycle management of your secrets, you can now use Azure Key Vault events on Azure Event Grid (currently in preview) to trigger automation workflows using Azure Functions, WebHooks, or any supported event handlers. By subscribing to changes in the status of keys, certificates or secrets stored in Azure Key Vault (such as about to expire, already expired, or new version available), you can automatically set up notifications or alerts to have the teams in charge perform the required actions.
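
    A rough sketch of subscribing to those events with the CLI is shown below; the event type name and endpoint are assumptions for illustration (any supported Event Grid handler, such as a function or webhook, can receive the events), and because the capability is in preview the details may change.

        # Subscribe a webhook handler to certificate near-expiry events from a Key Vault (preview)
        keyvault_id=$(az keyvault show --name myKeyVault --query id -o tsv)

        az eventgrid event-subscription create \
          --name kv-expiry-alerts \
          --source-resource-id $keyvault_id \
          --included-event-types Microsoft.KeyVault.CertificateNearExpiry \
          --endpoint https://contoso.example.com/api/keyvault-events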

    Unified monitoring experience

    Monitoring enables development teams to identify errors, bottlenecks, faulty services, and overall performance status across cloud applications. In addition to the existing capabilities for monitoring web applications and serverless functions, both Azure Functions and Azure App Service are now adding integration (currently in preview) with Azure Monitor Logs, sending log telemetry to a single workspace where you can create queries to quickly retrieve, consolidate, and analyze collected data—including using third party services for analysis—or set alert rules.

    If you haven’t already, sign up for an Azure free account and start building serverless applications today! We cannot wait to see the new business-critical apps you'll build using the Azure Functions Premium plan and automation benefits you'll realize using PowerShell support in Azure Functions. Try them out today, and if you have any feedback please reach us on Twitter, GitHub, StackOverflow, and UserVoice.


    Azure. Invent with purpose.
