
Top Stories from the Microsoft DevOps Community – 2019.05.03


It’s Friday – which means that Microsoft Build 2019 starts Monday! I’m literally writing this on my way to the airport and I couldn’t be more excited to be able to talk to so many passionate, creative developers next week. If you’re there, please stop by the Azure DevOps booth on the expo floor to say hi, and be sure to check out all the DevOps focused breakout sessions. And if you’re not there? Don’t miss everything that’s streaming live because we’ve got so much to show you.

How to edit a YAML Azure DevOps Pipeline
So you’re committed to checking in your build definition as YAML? (You are, right?!) That’s awesome, but sometimes editing YAML can be a bit tricky. There are so many tasks, after all, and so many options. Gian Maria Ricci shows you how much simpler it can be with the Azure Pipelines plug-in for Visual Studio Code.

Creating a Pull Request workflow in Azure DevOps
If you’re just getting started with Azure Pipelines from another CI build provider, you might not realize that you can set up multiple pipelines for one project easily. Andrew Craven explains how to set up a separate pull request validation build from an (after-merge) continuous integration build.

How I Built a Blog
Isaac Levin outlines how to build a highly-available blog out of JAMstack generators and Azure, all deployed with Azure DevOps. (Editor’s note: I think that there’s a Freudian slip in this post, where he deploys to Azure Blob Storage, not Azure Blog Storage. But shouldn’t it be the same?)

Breaking the wall between data scientists and app developers with Azure DevOps
Practices like healthy version control and unit tests are critical for every software project, and I’m thrilled to see my friends in the machine learning space evolve the best practices for their community. Wolfgang Pauli looks at what DevOps means to data science.

As always, if you’ve written an article about Azure DevOps or find some great content about DevOps on Azure then let me know! I’m @ethomson on Twitter.

The post Top Stories from the Microsoft DevOps Community – 2019.05.03 appeared first on Azure DevOps Blog.


New Azure Machine Learning updates simplify and accelerate the ML lifecycle


With the exponential rise of data, we are undergoing a technology transformation, as organizations realize the need for insight-driven decisions. Artificial intelligence (AI) and machine learning (ML) technologies can help harness this data to drive real business outcomes across industries. Azure AI and the Azure Machine Learning service are leading customers to the world of ubiquitous insights and enabling intelligent applications such as product recommendations in retail, load forecasting in energy production, image processing in healthcare, and predictive maintenance in manufacturing.

Microsoft Build 2019 represents a major milestone in the growth and expansion of Azure Machine Learning with new announcements powering the entire machine learning lifecycle.

  • Boost productivity for developers and data scientists across skill levels with integrated zero-code and code-first authoring experiences, as well as automated machine learning advancements for building high-quality models easily.
  • Deploy, manage, and monitor models with enterprise-grade MLOps (DevOps for machine learning), along with hardware-accelerated models for unparalleled scale and cost performance and model interpretability for transparency in model predictions.
  • Get choice and flexibility through open-source capabilities, including an MLflow implementation, ONNX Runtime support for TensorRT and Intel nGraph, and the new Azure Open Datasets service that delivers curated open data to improve model accuracy.

With these announcements and other improvements being added weekly, Azure Machine Learning continues to help customers easily apply machine learning to grow, compete and meet their objectives.

“By seamlessly integrating Walgreens stores and our other points of care with Microsoft’s Azure AI platform and Azure Machine Learning, the partnership will offer personalized lifestyle, wellness and disease management solutions, available via customers’ delivery method of choice.” 

— Vish Sankaran, Chief Innovation Officer, Walgreens Boots Alliance, Inc.

Boost productivity with simplified machine learning

“Using Azure Machine Learning service, we get peace of mind with automated machine learning, knowing that we are exhausting all the possible scenarios and using the best model for our inputs.”

— Diana Kennedy, Vice President, Strategy, Architecture, and Planning, BP

Automated machine learning advancements

Doubling down on our mission to simplify AI, the new automated machine learning user interface (Preview) enables business domain experts to train machine learning models without writing a single line of code, in just a few clicks. Learn how to run an automated ML experiment in the portal.

Automated machine learning UI

Feature engineering updates include new featurizers that provide tailor-made inputs for any given data, to deliver optimal models. Improvements in sweeping different combinations of algorithms and hyperparameters, plus the addition of popular learners like the XGBoost algorithm, enable greater model accuracy. Compute optimization automatically guides which algorithms to try and where to focus, while early termination ensures training runs deliver models efficiently. Automated machine learning also provides complete transparency into algorithms, so developers and data scientists can manually override and control the process. All of these advancements help ensure the best model is delivered.
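To make the sweeping and early-termination ideas concrete, here is a toy sketch in plain Python. It is purely illustrative, not the service's actual search strategy, and `train_eval` is a hypothetical scoring callback you would supply:

```python
import itertools

def sweep(train_eval, algorithms, grids, patience=3):
    """Try each algorithm/hyperparameter combination, stopping early
    once `patience` consecutive trials fail to beat the best score."""
    best_config, best_score = None, float("-inf")
    stale = 0
    for algo in algorithms:
        keys = list(grids[algo])
        for combo in itertools.product(*(grids[algo][k] for k in keys)):
            params = dict(zip(keys, combo))
            score = train_eval(algo, params)
            if score > best_score:
                best_config, best_score, stale = (algo, params), score, 0
            else:
                stale += 1
                if stale >= patience:      # early termination
                    return best_config, best_score
    return best_config, best_score
```

In practice `train_eval` would wrap a real training run; automated ML layers much smarter model selection, featurization, and compute guidance on top of this basic loop.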

Building forecasts is an integral part of any business, whether it’s revenue, inventory, sales, or customer demand. Forecasting with automated machine learning includes new capabilities that improve the accuracy and performance of recommended models with time series data, including new predict forecast function, rolling cross validation splits for time series data, configurable lags, window aggregation, and holiday featurizer.

Together, these capabilities deliver highly accurate forecasting models and support automated machine learning across many scenarios.
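A rolling cross-validation split, one of the time-series features listed above, keeps every validation window strictly after its training window. A minimal sketch of the idea (not the service's API):

```python
def rolling_cv_splits(n_samples, n_splits=3, horizon=1):
    """Return (train_indices, test_indices) pairs in which each test
    window immediately follows its training window in time, so the
    model is never validated on data it could not have seen."""
    first_train_end = n_samples - horizon * n_splits
    splits = []
    for i in range(n_splits):
        train_end = first_train_end + i * horizon
        splits.append((list(range(train_end)),
                       list(range(train_end, train_end + horizon))))
    return splits
```

With 10 daily observations, 3 splits, and a horizon of 2, the training window grows from 4 to 8 points while each test window covers the next 2 days.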

Azure Machine Learning visual interface (Preview)

The visual interface is a powerful drag-and-drop workflow capability that simplifies the process of building, training, and deploying machine learning models. Customers who are new to machine learning or prefer a zero-code experience can take advantage of capabilities similar to those found in Azure Machine Learning Studio, inside the Azure Machine Learning service. Data preparation, feature engineering, training algorithms, and model evaluation are presented in an intuitive web user experience, backed by the scale, version control, and enterprise security of the Azure Machine Learning service.

Azure Machine Learning visual interface

With this new visual interface, we have started to combine the best of Azure Machine Learning Studio in Azure Machine Learning service. We will continue to share more updates throughout the year as we move from Preview towards General Availability.

Try it out yourself with this tutorial.

Hosted notebooks in Azure Machine Learning (Preview)

The new notebook-VM-based authoring experience is directly integrated into Azure Machine Learning, providing a code-first way for Python developers to conveniently build and deploy models from the workspace. Developers and data scientists can perform every operation supported by the Azure Machine Learning Python SDK using a familiar Jupyter notebook in a secure, enterprise-ready environment.

Hosted notebook VM (Preview) in Azure Machine Learning

Get started quickly and access a notebook directly in Azure Machine Learning, use preconfigured notebooks with no set up required, and fully customize notebook VMs by adding custom packages and drivers.

Enterprise-grade model deployment, management, and monitoring

MLOps - DevOps for machine learning

MLOps (also known as DevOps for Machine Learning) is the practice for collaboration and communication between data scientists and DevOps professionals to help manage the production machine learning lifecycle.

New MLOps capabilities in Azure Machine Learning bring the sophistication of DevOps to data science, with orchestration and management capabilities to enable effective ML Lifecycle management with:

  • Model reproducibility and versioning control to track and manage assets to create the model and sharing of ML pipelines, using environment, code, and data versioning capabilities.
  • Audit trail to ensure asset integrity and provides control logs to help meet regulatory requirements.
  • Packaging and validation for model portability and to certify model performance.
  • Deployment and monitoring support with a simplified experience for debugging, profiling and deploying models, to enable releasing models with confidence and knowing when to retrain.
  • Azure DevOps extension for Machine Learning and the Azure ML CLI to submit experiments from a DevOps Pipeline, track code from Azure Repos or GitHub, trigger release pipelines when an ML model is registered, and automate end-to-end ML deployment workflows using Azure DevOps Pipelines.
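The versioning and audit-trail bullets come down to tracking who registered what, when, and with which exact bytes. The toy registry below is purely illustrative (the real capability lives in the Azure Machine Learning workspace, SDK, and CLI), but it shows the shape of the idea:

```python
import hashlib

class ModelRegistry:
    """Toy model registry: every registered artifact gets an
    auto-incremented version and a content hash, so any deployed
    model can be traced back to the exact bytes that produced it."""

    def __init__(self):
        self._entries = {}

    def register(self, name, artifact_bytes, tags=None):
        versions = self._entries.setdefault(name, [])
        entry = {
            "version": len(versions) + 1,
            "sha256": hashlib.sha256(artifact_bytes).hexdigest(),
            "tags": tags or {},
        }
        versions.append(entry)
        return entry

    def history(self, name):
        return self._entries[name]
```

A release pipeline can then gate deployment on a specific registered version and hash, which is what makes retraining and rollback auditable.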

Operationalize models efficiently with MLOps

These capabilities enable customers to bring their machine learning scenarios to production by supporting reproducibility, auditability, and automation of the end-to-end lifecycle and leading to improved model quality over time.

Learn more about MLOps with Azure Machine Learning.

Hardware accelerated models and FPGA on Data Box Edge

In addition to acceleration available with GPUs, now scale from cloud to edge with Azure Machine Learning Hardware Accelerated Models, powered by FPGAs. These Hardware Accelerated Models are now generally available in the cloud, along with a preview of models deployed to Data Box Edge.

FPGA technology supports compute-intensive scenarios like deep neural networks (DNNs), which have ushered in breakthroughs in computer vision, without forcing tradeoffs between price and performance. With FPGAs, it is possible to achieve ultra-low latency with ResNet 50, ResNet 152, VGG-16, DenseNet 121, and SSD-VGG. FPGAs enable real-time insights for scenarios like manufacturing defect analysis, satellite imagery, or autonomous video footage to drive business-critical decisions.

Learn more about FPGAs and Azure Machine Learning.

Model interpretability

Microsoft is committed to supporting transparency, intelligibility, and explanation in machine learning models. Model interpretability brings us one step closer to understanding the predictions a model makes, to ensure fairness and avoid model bias. This deeper understanding is key to uncovering insights about the model itself, both to improve model accuracy during training and to explain model behaviors and prediction outcomes during inferencing.

Model interpretability is available in Preview and brings together cutting-edge open-source technologies (e.g., SHAP, LIME) under a common API, giving data scientists the tools to explain machine learning models globally on all data, or locally on a specific data point, in an easy-to-use and scalable fashion.
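For intuition, a global explanation can be as simple as permutation importance: shuffle one feature at a time and measure how much the score drops. This sketch is a generic illustration of that idea, not the SHAP or LIME implementations surfaced by the Preview:

```python
import random

def permutation_importance(predict, X, y, metric, n_repeats=5, seed=0):
    """Average score drop when each feature column is shuffled;
    a larger drop means the model leans on that feature more."""
    rng = random.Random(seed)
    base = metric(predict(X), y)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            Xp = [row[:j] + [col[i]] + row[j + 1:] for i, row in enumerate(X)]
            drops.append(base - metric(predict(Xp), y))
        importances.append(sum(drops) / n_repeats)
    return importances
```

Run it against a model that ignores its second feature and that feature's importance comes out as exactly zero, which is the kind of transparency interpretability tooling provides at much larger scale.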

Learn more about model interpretability.

Open and interoperable platform providing flexibility and choice 

“All the data scientists on our team enjoy using Azure Machine Learning, because it’s fully interoperable with all the other tools they use in their day-to-day work—no extra training is needed, and they get more done faster now.”

— Matthieu Boujonnier, Analytics Application Architect and Data Scientist, Schneider Electric

ONNX Runtime with Azure Machine Learning

Azure Machine Learning service supports ONNX (Open Neural Network Exchange), the open standard for representing machine learning models from TensorFlow, PyTorch, Keras, SciKit-Learn, and many other frameworks. An updated version of ONNX Runtime is now available, fully supporting the ONNX 1.5 specification, including state-of-the-art object detection models such as YOLOv3 and SSD. With ONNX Runtime, developers now have a consistent scoring API that enables hardware acceleration, thanks to the general availability of NVIDIA TensorRT integration and the public preview of Intel nGraph integration. ONNX Runtime is used on millions of Windows devices as part of Windows ML, and it handles billions of requests in hyperscale Microsoft services such as Office, Bing, and Cognitive Services, where an average of two times the performance gains have been seen.

Learn more about ONNX and Azure Machine Learning.

MLflow integration

Azure Machine Learning supports popular open-source frameworks to build highly accurate machine learning models easily, and enables training to run in a variety of environments, whether on-premises or in the cloud. Now developers can use MLflow with their Azure Machine Learning workspace to log metrics and artifacts from training runs in a centralized, secure, and scalable location.

Azure Open Datasets (Preview)

Azure Open Datasets is a new service providing curated, open datasets hosted on Azure and easily accessible from Azure Machine Learning. Use these datasets for exploration or combine them with other data to improve the accuracy of machine learning models. The datasets currently provided are historical and forecast weather data from NOAA, and many more will be added over time. Developers and data scientists can also nominate datasets to Azure, to support the global machine learning community with relevant and optimized data.
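A typical use is enriching your own records with the open weather history by date. The rows below are made up for illustration; real datasets would be fetched through Azure Open Datasets rather than hard-coded:

```python
# Hypothetical daily sales rows, enriched with made-up NOAA-style
# weather history by joining on the date key.
sales = [
    {"date": "2019-05-01", "units": 120},
    {"date": "2019-05-02", "units": 95},
]
weather = [
    {"date": "2019-05-01", "temp_c": 18.0, "precip_mm": 0.0},
    {"date": "2019-05-02", "temp_c": 11.5, "precip_mm": 7.2},
]

weather_by_date = {row["date"]: row for row in weather}
# each enriched row now carries both sales and weather features
enriched = [{**row, **weather_by_date[row["date"]]} for row in sales]
```

Feeding the enriched rows to a forecasting model is exactly the "combine with other data to improve accuracy" scenario described above.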

Azure Open Datasets

Learn more about Azure Open Datasets.

Start building experiences

Envisioning, building, and delivering these advancements to the Azure Machine Learning service has been made possible by closely working with our customers and partners. We look forward to helping simplify and accelerate machine learning even further by providing the most open, productive, and easy-to-use machine learning platform. Together, we can shape the next phase of innovation, making AI a reality for your business and enabling breakthrough experiences.

Get started with a free trial of Azure Machine Learning service.

Learn more about the Azure Machine Learning service and follow the quickstarts and tutorials. Explore the service using the Jupyter notebook samples.

Read all the Azure AI news from Microsoft Build 2019.

A deep dive into what’s new with Azure Cognitive Services


This blog post was co-authored by Tina Coll, Senior Product Marketing Manager, Azure Cognitive Services.

Microsoft Build 2019 marks an important milestone for the evolution of Azure Cognitive Services with the introduction of new services and capabilities for developers. Azure empowers developers to make reinforcement learning real for businesses with the launch of Personalizer. Personalizer, along with Anomaly Detector and Content Moderator, is part of the new Decision category of Cognitive Services that provide recommendations to enable informed and efficient decision-making for users.

Available now in preview and general availability (GA):

Preview

Cognitive service APIs:

Container support to run business AI models at the edge, closer to the data:

Generally available

Cognitive Services span the categories of Vision, Speech, Language, Search, and Decision, offering the most comprehensive portfolio in the market for developers who want to embed the ability to see, hear, translate, decide, and more into their apps. With so much in store, let’s get to it.

Decision: Introducing Personalizer, reinforcement learning for the enterprise

Retail, media, e-commerce, and many other industries have long pursued the holy grail of personalizing the experience. Unfortunately, giving customers more of what they want often requires stringing together various CRM, DMP, and name-your-acronym platforms and running A/B tests day and night. Reinforcement learning is the set of techniques that allow AI to achieve a goal by learning from what’s happening in the world in real time. Only Azure delivers this powerful reinforcement learning-based capability through a simple-to-use API with Personalizer.

Within Microsoft, teams are using Personalizer to enhance the user experience. Xbox saw a 40 percent lift in engagement by using Personalizer to display content to users that will most likely interest them.
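At its core, this style of personalization is a rank/reward loop: the system proposes an action, observes a reward signal, and updates its estimates. The epsilon-greedy sketch below is a deliberately simplified stand-in for what Personalizer does; the real service uses contextual bandits with rich context features, which this toy ignores:

```python
import random

class EpsilonGreedyRanker:
    """Tiny rank/reward loop: explore a random action with
    probability epsilon, otherwise exploit the best-known one."""

    def __init__(self, actions, epsilon=0.2, seed=7):
        self.values = {a: 0.0 for a in actions}  # estimated reward per action
        self.counts = {a: 0 for a in actions}
        self.epsilon = epsilon
        self.rng = random.Random(seed)

    def rank(self):
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.values))   # explore
        return max(self.values, key=self.values.get)    # exploit

    def reward(self, action, r):
        self.counts[action] += 1
        # incremental mean keeps a running estimate of each action's value
        self.values[action] += (r - self.values[action]) / self.counts[action]
```

Simulating users who only ever engage with "news" quickly concentrates the estimated value on that action, which is the lift mechanism behind results like the Xbox engagement number above.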

A diagram that illustrates how Personalizer works to optimize towards business goals.

Speech: In-person meetings just got better with conversation transcription

Conversation transcription, an advanced speech-to-text feature, improves meeting efficiency by transcribing conversations in real time and capturing who said what and when, enabling all participants to engage fully and quickly follow up on next steps. Pair conversation transcription with a device integrating the Speech Service Device SDK, now generally available, for higher-quality transcriptions. It also integrates with a variety of meeting conference solutions, including Microsoft Teams and other third-party meeting software. Visit the Speech page for more details.

Example of conversation transcription device and results.

Vision: Unlocking the value of your content – from forms to digital inked notes

Form Recognizer uses advanced machine learning technology to quickly and accurately extract text and data from business forms and documents. With container support, this service can run on-premises and in the cloud. Automate information extraction quickly and tailor it to specific content with as few as five samples and no manual labeling.

An image showing a document with a chart on the left and the extracted key-value pairs from the document on the right.

Ink Recognizer provides applications with the ability to recognize digital handwriting, common shapes, and the layout of inked documents. Through an API call, you can leverage Ink Recognizer to create experiences that combine the benefits of physical pen and paper with the best of the digital.

A diagram showing the ink stroke input on the left and the recognition tree on the right.

Integrated in Microsoft Office 365 and Windows, Ink Recognizer gives users the freedom to create content in a natural way. In PowerPoint, Ink Recognizer converts ideas to professional-looking slides in a matter of moments.

An animated GIF showing how Ink Recognizer is used in PowerPoint.

Bringing AI to the edge

In November 2018, we announced the Preview of Cognitive Services in containers that run on-premises, in the cloud or at the edge, an industry first.

A diagram showing a representation of Cognitive Services on the left, and a representation of the ability to deploy Cognitive Services with containers on the right.

Container support is now available in preview for:

With Cognitive Services in containers, ISVs and enterprises can transform their businesses with edge computing scenarios. Axon, a global leader in connected public safety technologies partnering with more than 17,000 law enforcement agencies in 100+ countries around the world, relies on Cognitive Services in containers for public safety scenarios where the difference of a second in response time matters:

“Microsoft's containers for Cognitive Services allow us to ensure the highest levels of data integrity and compliance for our law enforcement customers while enabling our AI products to perform in situations where network connectivity is limited.”

– Moji Solgi, VP of AI and Machine Learning, Axon

Fortifying the existing Cognitive Services portfolio

In addition to the new Cognitive Services, the following capabilities are generally available:

Neural Text-to-Speech now supports 5 voices and is available in 9 regions to provide customers greater language coverage and support. By changing the styles using Speech Synthesis Markup Language or the voice tuning portal, you can easily refine the voice to express different emotions or speak with different tones for various scenarios. Visit the Text-to-Speech page to “hear” more on the new voices available.

Computer Vision Read operation reads multi-page documents and contains improved capabilities for extracting text from the most common file types including PDF and TIFF.

An image showing a sample PDF on the left, and the extracted JSON output from using Computer Vision on the right.

In addition, Computer Vision has an improved image tagging model that now understands more than 10,000 concepts, scenes, and objects, and has expanded the set of recognized celebrities from 200K to 1M. Video Indexer has several enhancements, including a new AI Editor, which won a NAB Show Product of the Year Award in the AI/ML category at this year’s event.

Named entity recognition, a capability of Text Analytics, takes free-form text and identifies occurrences of entities such as people, locations, organizations, and more. Through an API call, named entity recognition uses robust machine learning models to find and categorize more than twenty types of named entities in text documents. Named entity recognition supports 19 language models in Preview, with English and Spanish now generally available.
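Calling the service amounts to posting a "documents" payload. The helper below only builds that body with the standard library; the endpoint name is a placeholder, and the exact path, headers, and response shape should be checked against the Text Analytics documentation:

```python
import json

# Placeholder resource name; a real call also needs your resource key
# in the Ocp-Apim-Subscription-Key header.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"

def build_ner_request(texts, language="en"):
    """Build the JSON body for a named entity recognition call:
    a list of documents, each with an id, language, and text."""
    documents = [
        {"id": str(i + 1), "language": language, "text": text}
        for i, text in enumerate(texts)
    ]
    return json.dumps({"documents": documents})
```

The response maps each document id back to the entities found in it, so batching several documents per call is the usual pattern.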

Language Understanding (LUIS) now supports multiple intents to help users better comprehend complex and compound sentences.

QnA Maker supports multi-turn dialogs, enhancing its core capability of extracting dialog from PDFs or websites.

Get started today

Today’s milestones illustrate our commitment to bringing the latest innovations in AI to the intelligent cloud and intelligent edge.

To get started building vision and search intelligent apps, visit the Azure Cognitive Services page.

AI-first content understanding, now across more types of content for even more use cases


This post is authored by Elad Ziklik, Principal Program Manager, Applied AI.

Today, data isn’t the barrier to innovation, usable data is. Real-world information is messy and carries valuable knowledge in ways that are not readily usable and require extensive time, resources, and data science expertise to process. With Knowledge Mining, it’s our mission to close the gap between data and knowledge.

We’re making it easier to uncover latent insights across all your content with:

  • Azure Search’s cognitive search capability (general availability)
  • Form Recognizer (preview)

Cognitive search and expansion into new scenarios

Announced at Microsoft Build 2018, Azure Search’s cognitive search capability uniquely helps developers apply a set of composable cognitive skills to extract knowledge from a wide range of content. Deep integration of cognitive skills within Azure Search enables the application of facial recognition, key phrase extraction, sentiment analysis, and other skills to content with a single click. This knowledge is organized and stored in a search index, enabling new experiences for exploring the data.

Cognitive search, now generally available, delivers:

  • Faster performance - Improved throughput capabilities with processing speeds up to 30 times faster than in preview, completing previously hour-long tasks in only a couple of minutes.
  • Support for complex data types - Native support for complex types extends the kinds of data that can be stored and searched (this has been the most requested Azure Search feature). Raw datasets can include hierarchical or nested substructures that do not break down neatly into a tabular rowset, for example multiple locations and phone numbers for a single customer.
  • New skills - An extended library of pre-built skills based on customer feedback, with improved support for processing images, the ability to create conditional skills, and shaper skills that allow for better control and management of multiple skills in a skillset. Plus, entity recognition now provides additional information for each entity identified, such as the Wikipedia URL.
  • Easy implementation - The solution accelerator provides all the resources needed to quickly build a prototype, including templates for deploying Azure resources, a search index, custom skills, a web app, and PowerBI reports. Use the accelerator to jump start development efforts and apply cognitive search to your business needs.
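The composability that makes this work is easy to picture: each skill maps an enriched document to a further-enriched document, and a skillset is just the chain. The stand-in skills below are naive toys, nothing like the real Cognitive Services models:

```python
def key_phrase_skill(doc):
    """Stand-in skill: treat capitalized words as 'key phrases'."""
    doc["keyPhrases"] = [w.strip(".,") for w in doc["text"].split()
                         if w[:1].isupper()]
    return doc

def sentiment_skill(doc):
    """Stand-in skill: score sentiment against a tiny positive lexicon."""
    positive = {"great", "love", "excellent"}
    words = [w.strip(".,").lower() for w in doc["text"].split()]
    doc["sentiment"] = sum(w in positive for w in words) / max(len(words), 1)
    return doc

def run_skillset(doc, skills):
    """Apply each skill in order, accumulating enrichments on the document."""
    for skill in skills:
        doc = skill(doc)
    return doc
```

In Azure Search the same chaining happens inside the indexing pipeline with real pre-built or custom skills, and the accumulated enrichments land in the search index.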

See what’s possible when you apply cognitive search to unstructured content, like art:

Tens of thousands of customers use Azure Search today, processing over 260 billion files each month. Now, with cognitive search, millions of enrichments are performed over data ranging from PDFs to Office documents, and from JSON files to JPEGs. This is possible because cognitive search reduces the complexity of orchestrating enrichment pipelines containing custom and prebuilt skills, resulting in deeper insight into content. Customers across industries including healthcare, legal, media, and manufacturing use this capability to solve business challenges.

“Complex customer needs and difficult markets are our daily business. Cognitive search enables us to augment expert knowledge and experience for reviewing complex technical requirements into an automated solution that empowers knowledge workers throughout our organization.”

— Chris van Ravenswaay, Business Solution Manager, Howden

Extending AI-driven content understanding beyond search

Many scenarios outside of search require extracted insights from messy, complicated information. Expanding cognitive search to support unique scenarios, we are excited to announce the preview of the knowledge store capability within cognitive search – allowing access to AI-generated annotations in table and JSON format for application in non-search use cases like PowerBI dashboards, machine learning models, organized data repositories, bots, and other custom applications.

Form Recognizer, a new Cognitive Service

The Form Recognizer Cognitive Service, available in preview, applies advanced machine learning to accurately extract text, key-value pairs, and tables from documents.

With as few as 5 samples, Form Recognizer tailors its understanding to your documents. You can also use the REST interface of the Form Recognizer API to then integrate into cognitive search indexes, automate business processes, and create custom workflows for your business. You can turn forms into usable data at a fraction of the time and cost, so you can focus more time acting on the information rather than compiling it.

Container support lets Form Recognizer run at the edge, on-premises, and in the cloud. The portable architecture can be deployed directly to Azure Kubernetes Service or Azure Container Instances, or to a Kubernetes cluster deployed to Azure Stack.

Organizations like Chevron and Starbucks are using Form Recognizer to accelerate extraction of knowledge from forms and make faster decisions.

We look forward to seeing how you leverage these products to drive impact for your business.

Getting Started

Visual Studio Code C/C++ extension: May 2019 Update


The May 2019 update of the Visual Studio Code C/C++ extension is now available to C/C++ extension Insiders. This release includes many new features – Visual Studio Code Remote Development extensions with C/C++ extension, an IntelliSense Configurations settings editor UI, and IntelliSense improvements. For a full list of this release’s improvements, check out our release notes on GitHub.

You can join the C/C++ extension Insiders program by changing your C_Cpp: Update Channel setting to “Insiders.”

Visual Studio Code Remote Development with the C/C++ extension

Remote Development with Visual Studio Code is now available, and you can use it with the C/C++ extension!

Visual Studio Code Remote Development allows you to use a container, remote machine, or the Windows Subsystem for Linux (WSL) as a full-featured development environment. Visual Studio Code can provide a local-quality development experience including full IntelliSense, debugging, and code editing regardless of where your code is hosted. In fact, you don’t need any source code on your local machine to use this capability.

With Visual Studio Code Remote Development extensions you can:

  • Easily develop your C/C++ programs on the same operating system you are deploying to
  • Sandbox your development environment
  • Use runtimes not available on your local OS
  • Access an existing environment from multiple locations
  • Debug an application running somewhere else

The local OS running VS Code connects to the remote OS, which can hold all your source code

Setting up Visual Studio Code Remote Development

You can install the public preview of the Remote Development extension pack in Visual Studio Code Insiders from the extension marketplace.

More details on getting started with the extensions can be found in the Visual Studio Code Remote Development getting started section. You will see a few new components when you install the Remote Development pack:

New sidebar icon and WSL connection displayed in development bottom bar

Using Visual Studio Code Remote Development with the C/C++ extension

Once you are set up with a Visual Studio Code Remote Development extension, install the C/C++ extension for the Remote Development extension you wish to use. For example, with WSL:

"Install on WSL" option for the C/C++ extension after Remote - WSL is installed

The extension will provide local-quality C/C++ IntelliSense, debugging, and code browsing for the remote environment you’re developing for. In the above case, I now have access to the Linux version of the C/C++ extension.

Keep in mind, you may need to change your compiler path, tasks, or launch.json based on the environment you are remotely targeting. You can follow our GCC on the Windows Subsystem for Linux tutorial for more details on setting up WSL with the C/C++ extension.

IntelliSense Configuration settings editor UI

Users of the C/C++ extension have consistently told us that configuring IntelliSense is difficult, especially editing the c_cpp_properties.json file correctly. To address this pain point, we created a UI editor to help you more easily configure basic IntelliSense settings. The IntelliSense Configuration settings editor UI:

  • makes IntelliSense configuration easier to understand
  • provides a simple and clear interface for the most basic settings to get IntelliSense working
  • validates inputs such as missing paths
  • offers an alternative to editing JSON files (but, you’ll always be able to edit JSON directly if you’d like)

Here is a screenshot of the IntelliSense Configuration settings editor UI:

IntelliSense Configuration settings UI shows the basic settings needed to configure IntelliSense with the C/C++ extension

You can get to the IntelliSense Configuration settings editor UI through the command palette (Ctrl+Shift+P) via the “C/C++: Edit configurations (UI)” command. There are additional entry points including quick fix IntelliSense error links.

Please Note: When you select “Configure” for the first time to configure IntelliSense, VS Code will open the UI editor or JSON file based on your workbench.settings.editor setting. If workbench.settings.editor is set to “ui”, then the UI editor will open by default, and if it is set to “json”, then the JSON file will open by default. You can view that setting under VS Code preferences → settings → “Workbench Settings Editor”.

IntelliSense Improvements

We made a variety of IntelliSense improvements in the May 2019 update.

IntelliSense Configuration

We now validate that the specified compilerPath and intelliSenseMode values match for a better IntelliSense configuration experience in c_cpp_properties.json and the IntelliSense Configurations UI.
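As a sketch, a c_cpp_properties.json configuration where the two values agree might look like the following (the compiler path is a placeholder for your own installation):

```json
{
    "configurations": [
        {
            "name": "Linux",
            "compilerPath": "/usr/bin/gcc",
            "intelliSenseMode": "gcc-x64",
            "includePath": ["${workspaceFolder}/**"],
            "cStandard": "c11",
            "cppStandard": "c++17"
        }
    ],
    "version": 4
}
```

If compilerPath pointed at gcc while intelliSenseMode were set to a MSVC mode, the new validation would flag the mismatch.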

#include Errors

The IntelliSense engine fallback setting now defaults to disabled, so the IntelliSense engine will no longer automatically switch to the Tag Parser for translation units containing an #include error.

Error Squiggles

The disabled value for error squiggles no longer shows missing header squiggles.

By default, we now show error squiggles only when include headers are successfully resolved.

Tell Us What You Think

Download the C/C++ extension for Visual Studio Code, give it a try, and let us know what you think. If you run into any issues, or have any suggestions, please report them on the Issues section of our GitHub repository. Set the C_CppProperties.UpdateChannel in your Visual Studio Code settings to “Insiders” to get early builds of our extension.
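For example, adding the following to your VS Code settings.json opts you in to Insider builds of the extension:

```json
{
    "C_CppProperties.UpdateChannel": "Insiders"
}
```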

We can be reached via the comments below or via email (visualcpp@microsoft.com). You can also find our team (@VisualC) on Twitter – and me (@tara_msft).

The post Visual Studio Code C/C++ extension: May 2019 Update appeared first on C++ Team Blog.

Visual Studio 2019 for Mac version 8.1 Preview 1


Today, we are proud to announce the next major update for Visual Studio for Mac: Visual Studio 2019 for Mac version 8.1 Preview 1. In this update, we are offering our new C# editor as the default experience, in addition to introducing support for .NET Core 3 Preview and new project templates. We’ve also been working to improve performance and reliability across the board, based on feedback that we’ve heard from the Visual Studio for Mac community.

You can install this update via the Updater inside Visual Studio for Mac by switching your channel from Stable to Preview. If at any time you would like to switch back to the Stable channel, you can do that via the Updater as well. We always welcome your feedback, so please make sure to share your thoughts with us via Developer Community or from the built-in Report a Problem tool in the IDE. 

A new default editor in VS for Mac: More speed, more reliability  

When we released Visual Studio 2019 for Mac in April, we offered an entirely new C# editor as an opt-in experience. We wanted to ensure that the new editor meets our standards of performance and reliability before promoting it to the default editor within Visual Studio 2019 for Mac. Additionally, we wanted to ensure no major gaps existed in behavior or functionality between the legacy editor and the new editor. After a lot of testing and many conversations with our community, we believe the editor is now at a point where it can be the default experience.

As the new editor shares all of its non-UI code with the editor in Visual Studio on Windows, we can now leverage the power of Visual Studio to provide a fast, fluent, and reliable experience. Numerous new features and capabilities were introduced in the new editor in Visual Studio 2019 for Mac, including:

  • Improved typing responsiveness and scrolling speeds for a more fluid editing experience 
  • Modern editor features such as Multi-caret editing, Word Wrap, and Right-to-Left support 
  • Improved support for accented characters via macOS native input methods
  • An improved IntelliSense UI with faster performance 
  • New quick-action analyzers, shared with Visual Studio on Windows 

Visual Studio for Mac Editor

In this update, we have re-introduced many of your favorite and most requested features of the old editor, such as support for code snippets, various formatting and navigation tools, error highlights within the scrollbar, and source control tabs. We have also made many improvements to the overall look and feel of the new editor, including refreshed tooltips and signature view adornments. Before we release the final version of Visual Studio 2019 for Mac 8.1, we also plan to add in-line lightbulb adornments and Format Selection commands.

Visual Studio for Mac - Editor Snippets

.NET Core 3 Preview Support

Visual Studio 2019 for Mac 8.1 now offers full support of the .NET Core 3 Preview SDK, which means you can get started with the latest and greatest that .NET Core has to offer! You can learn more about the new features and fixes offered in .NET Core 3 through the What’s New documentation.

To get started on using .NET Core 3 Preview within Visual Studio 2019 for Mac, you must first download and install the latest Preview SDK. To do this, download the macOS installer from the .NET Core Download page and run the installer to add .NET Core 3 support to your system.

Once .NET Core 3 Preview is installed, you can create a new .NET Core 3 project simply by using the .NET Core template and selecting .NET Core 3 at the SDK Selection page.

.NET Core 3.0 Selection

As support for .NET Core 3 is still in preview, not all features are currently in place. One such example is support for C# 8, which will be available in a future update of Visual Studio 2019 for Mac.

New ASP.NET Core templates to help build complex web applications

When building web applications today, it’s common to work with a rich client-side JavaScript library like Angular or React. In this release, we’re including four new templates in Visual Studio for Mac, the same templates provided by the dotnet command line tool and Visual Studio on Windows:

These templates provide a starting point with a sample client-side application written using each of the technologies above. The application consumes data provided by an ASP.NET Core API backend. The project files generated by these templates are setup to build the TypeScript and JavaScript assets when you run your application, so that you can stay focused on building your app without leaving the IDE.

We’ve also added a new Razor Class Library template to make it easier to package and reuse your Razor views, pages, controllers, page models, view components, and data models. You can learn more about this in the ASP.NET Core Razor Pages documentation.

Performance and reliability

As we talk to our user community, one theme is clear: performance and reliability need to continue to improve. We have worked to ensure that each release is more reliable and better performing than the last. In this release, we have worked to optimize NuGet restore time, reduce the time it takes to load an existing project, resolved an issue where Visual Studio for Mac would hang on saving files when working with Unity projects, and improved the reliability of the new editor. We have also fixed several crashes and hangs, all of which can be reviewed in our Release Notes.

Please share your thoughts

We encourage you to download and try out the release today! Our aim is to make .NET development on macOS a breeze, and this release is our next step on this journey. Check out our recently updated product roadmap to see what we’re working on next and share your feedback and suggestions! We strive to be 100% driven by customer feedback and we love to hear from you.

The post Visual Studio 2019 for Mac version 8.1 Preview 1 appeared first on The Visual Studio Blog.

Microsoft Edge – All the news from Build 2019


Today kicks off Microsoft Build 2019, and with it, lots of exciting announcements for the next version of Microsoft Edge!

Less than a month ago, we shipped our first Dev and Canary channel preview builds of the next version of Microsoft Edge, built on the Chromium open-source project.  Today, we’re sharing a bit more about how Microsoft Edge will simplify development and improve productivity for our core customer constituencies: consumers, developers, and enterprises.

A first look at new productivity concepts

In Satya’s vision keynote, we previewed a set of new features we’re exploring, designed to make Microsoft Edge users more productive than ever and feel more in control when getting things done on the web.

Collections

We’ve heard a consistent problem from our customers in user studies, interviews, and feedback: The web can be overwhelming. It’s easy to lose track of where you are, and too difficult to turn the chaos of your tabs and windows into actionable information.

Collections is designed to tackle this challenge, using cloud-powered intelligence and an intuitive interface to help you collect, organize, and share content as you travel across the web. Intelligent export to apps like Word and Excel preserves the logical structure of your content, so you can turn a loose collection of paragraphs into a handout with citations, or turn a shopping list into a spreadsheet sortable by price.

Screen recording showing Collections exporting a set of saved cameras to a Word document.

Collections is in its early stages and is not yet available in preview builds. We’d love to hear your feedback on how this experience could be most useful in your browsing. We look forward to sharing more in future preview builds.

Privacy tools

We also previewed an early concept for new privacy tools in Microsoft Edge. We’ve heard from our customers that it’s too hard to understand how your data is being used by sites across the web, and you don’t feel in control of your own data when browsing.

Our privacy dashboard concept allows users to choose from clearly labelled preset levels of information sharing, which automatically configure the browser to protect users, with options to adjust exposure to third-party tracking and the impact on site compatibility.

Screen recording showing the privacy dashboard concept in Microsoft Edge

We’re in the early stages of exploring how best to empower users to be in control of their data and beginning conversations with industry partners and the browser community. We look forward to hearing your feedback on the concepts we shared, and we’re excited to share more in preview builds later on.

Simplifying web development with a consistent platform and tools

If you’re using our Dev or Canary channel preview builds today, you’ve already seen how the new Microsoft Edge provides robust compatibility with the latest web standards, thanks to a platform built on Chromium and rapid updates at the speed of the web – including our Canary channel, which ships on a daily basis. At Build, we’ll dive a bit deeper into how we’re addressing the top developer pain points in the current version of Microsoft Edge.

Our new developer tools are more powerful than ever, built on the Chromium DevTools for a familiar and capable experience. The built-in tools can now inspect and debug any Microsoft Edge-powered web content, whether it’s in the browser, PWAs, or even in a WebView, with a consistent experience across all these targets.

With full support for standards-based PWAs installed directly from the browser, you can bring the full power of the web to the desktop app environment. Because the next version of Microsoft Edge will be cross platform, these experiences will work consistently across Windows and macOS, and will stay up to date with the latest platform capabilities regardless of the underlying operating system version.

For Windows developers, we’re showing a first look at our new Microsoft Edge powered WebView, which brings the fidelity of the Chromium platform to Win32 and UWP Windows apps, allowing for sophisticated hybrid apps that can blend native capabilities with your choice of an always up-to-date or versioned web platform. Interested developers can try our first preview of the Win32 WebView control and give feedback, and stay tuned for more updates in the weeks and months ahead.

These changes are just the beginning of our journey for web developers – we’re thrilled to be joining the Chromium community and have already landed over 400 commits into Chromium, improving the experience in all Chromium browsers on Windows and even frameworks like Electron. You can read more about our initial areas of focus in our blog post from earlier this month. We look forward to continuing to improve Chromium for all our customers, and continuing to drive innovation in the web standards community.

Introducing Internet Explorer mode for seamless enterprise compatibility

For our enterprise customers, we announced a new Internet Explorer mode that brings full IE11 compatibility to Microsoft Edge for your internal sites, without compromising the modern web experience on the public internet.

Screen recording showing a legacy site being opened in Internet Explorer mode in Microsoft Edge.

We hear from our customers that most enterprises rely on a multiple-browser solution today, and from our customers and partners that this experience is disjointed and confusing.

IT Pros need to manage multiple browsers and users need to be trained on both, with settings and favorites falling out of sync between the two. In addition, users sometimes get stranded in IE11 after initially opening it for a compatibility scenario, which can result in LOB app developers needing to support IE11 even for newer apps, when they want the most modern capabilities.

The new Internet Explorer mode solves these problems by seamlessly rendering legacy IE-only content in high fidelity inside of Microsoft Edge, without the need to open a separate browser or for the user to change any settings manually. Microsoft Edge uses your existing Enterprise Mode Site List to identify sites which require IE rendering and simply switches to Internet Explorer mode behind the scenes.
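As an illustration, an Enterprise Mode Site List is an XML document along these lines (the hostname below is a placeholder, and the schema shown is the existing Enterprise Mode v.2 site list format rather than anything new in this announcement):

```xml
<site-list version="1">
  <!-- Sites matched here render with IE11 fidelity; in the next Microsoft Edge
       they open in Internet Explorer mode instead of a separate browser -->
  <site url="legacy-app.contoso.com">
    <compat-mode>IE8Enterprise</compat-mode>
    <open-in>IE11</open-in>
  </site>
</site-list>
```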

We’re excited to share more about Internet Explorer mode, as well as more details on deploying and managing Microsoft Edge, later this year.

Get started today

These features and more will begin to roll out in preview over time as we get closer to the broader launch of the next version of Microsoft Edge. We hope you’re already trying out our preview builds – if not, be sure to download a Dev or Canary channel build for Windows today. We think you’ll love it! If you’re not on Windows 10, don’t worry – we’re looking forward to sharing builds for macOS and previous versions of Windows soon.

To give feedback on this week’s news or share suggestions with the team, head over to the Microsoft Edge Insider community forums, get in touch with us on Twitter, or just use the “Send feedback” option in the Microsoft Edge menu to let us know what you think.

We look forward to hearing from you!

Kyle Pflug, Senior PM Lead, Microsoft Edge Developer Experience

The post Microsoft Edge – All the news from Build 2019 appeared first on Microsoft Edge Blog.

Visual Studio Container Tools Extension (Preview) Announcement


Today we’re excited to announce the preview availability of the new Visual Studio Container Tools Extension (Preview) for Visual Studio 2019. This is an important milestone in the iteration of our container tooling in Visual Studio, as we try to empower developers to work better with their containerized applications directly from within the IDE. The current Visual Studio Tools for Containers provide a great getting started experience for developers building new containerized applications, as well as capabilities to containerize an existing application. The extension tooling, available today, will provide developers additional functionality to help with building and diagnosing containerized applications from right within Visual Studio. 

Prerequisites 

To use the new extension, you’ll need to have the following installed: 

Installation 

You can easily acquire and install the new extension from the Visual Studio Marketplace. 

 Visual Studio Marketplace

Alternatively, you can acquire the extension directly from within Visual Studio using the Extensions -> Manage Extensions menu option. On the Manage Extensions window, select Online from the left and then use the Search text box on the top right-hand corner to search for “Visual Studio Container Tools Extensions”. 

Install Container tools from Visual Studio

What is this new extension? 

The goal of the Container Tools window is to provide a GUI experience within Visual Studio 2019 to aid container developers in building and diagnosing their containerized applications. At a high level, the new tooling provides the following capabilities: 

  • Show a list of containers on your local machine 
  • Start, Stop, and Remove containers 
  • View container logs (stdout/stderr) – choose whether or not to stream logs 
  • Search log contents using the standard Visual Studio Find Dialog 
  • Show the folder & files in a running container 
  • Open files from a running container inside Visual Studio 
  • Inspect container port mappings and environment variables 

If you’re used to using the Docker CLI tool to interact with your containers, this window provides a more convenient way to monitor your containers in the IDE and helps you be more productive by not having to switch constantly between your IDE and separate command/terminal windows. 

Container filesystem in Visual Studio

Container Logs in Visual Studio

Note: Containers started (F5 and Ctrl+F5) from Visual Studio will not display logs in this tab, use the Output window instead. 

Please share your thoughts 

We are very excited about our new Container tooling extension and encourage you to download and try it out today, whether you’re new to containers or an experienced Docker developer. You can also check out the Visual Studio documentation for more details. 

Our goal is to make working with containers a great experience in Visual Studio 2019, and we have many other ideas for the Containers window that we feel will help in building containerized applications. We want to hear from you, and we hope you can share your comments and suggestions on how we can make our tools work better for you. Do this by opening a new issue at https://github.com/Microsoft/DockerTools/issues. 

The post Visual Studio Container Tools Extension (Preview) Announcement appeared first on The Visual Studio Blog.


Developing people-centered experiences with Microsoft 365


Today at Microsoft Build 2019, Rajesh Jha and I will have the opportunity to share how developers can connect with customers in new ways and build people-centric experiences using the Microsoft 365 platform. I’ll focus on the two most ubiquitous canvases for developers – Windows and Microsoft Edge.

Windows as a canvas for moving the world forward

With over 800 million active devices on the Windows 10 platform, Windows is the canvas people use when they want to move the world forward. The opportunity is even greater when we consider the 1 billion+ people across work, life, and school using Microsoft 365 services, like Office and Windows combined. When people are at the center of the experience, it frees us to dream about the most optimal experiences for our employees or customers – allowing us to choose the right device with the right capabilities for any given task.

With this in mind, we made the following enhancements to support your innovation:

Example of Ink Recognizer Cognitive Service

  1. The Ink Recognizer Cognitive Service provides accurate recognition of digital ink content made possible by the power of the Cloud. You can now provide your users with consistent experiences wherever they are – Android, iOS, and the Web. The upcoming support for diagram recognition and guided handwriting recognition adds even more options for developers to create unique experiences for their users. You can start using the Ink Recognizer Cognitive Service today.
  2. With XAML Islands you can create people-centric experiences and connect your existing WPF, WinForms, or native Win32 codebase to new rich UI. The full release of XAML Islands is included in the Windows 10 May update, and even more UI capabilities will be available later this year as we continue to invest in the open-source WinUI Library.
  3. For writing cross-platform code in JavaScript with a native feel, developers can use an updated, high-performance React Native for Windows implementation to rapidly build native UX components using a React/web skillset. Developers who prefer C# and XAML can of course continue to use Xamarin & Xamarin.Forms for a similar high-performance implementation.

Optimize your full development workflow with Windows and Microsoft Edge

We will take a closer look at the next version of Microsoft Edge, built on the Chromium open source project. Microsoft Edge will provide robust compatibility with the latest web standards across all your devices. It introduces powerful, consistent developer tools to inspect and debug your web content in the browser and in web apps across platforms. We will also announce new features that will simplify life for web developers and IT pros, while embracing the best of the modern web across platforms.

  1. One browser for all web experiences. IE mode allows you to browse all your enterprise sites that target Internet Explorer and the modern web in a single browser. IE mode is coming to preview builds of Microsoft Edge later this year.
  2. Consistent web platform and tools. Developers want a consistent set of powerful tools that work across websites, Microsoft Edge based web apps, and WebViews.
  3. Making the web better for everyone. As a member of the Chromium community, our default position will be to contribute all web platform enhancements back to the project so that we are helping to further evolve web standards.

Beyond just the web, we are committed to optimizing developers’ end to end workflow so that Windows is the best OS for all development tasks. To achieve this, we made the following improvements to Windows and Visual Studio Code to address key requests from the development community:

New Windows Terminal with theming 

New Windows Terminal with theming 

  • A new Windows Terminal application that features a beautiful modern UI with tabs; tear away windows and shortcuts; full Unicode support including East Asian fonts, emojis and ligatures; and support for themes and extensions. A preview of the new Windows Terminal is available now.
  • Windows Subsystem for Linux 2 (WSL 2) is the next version of WSL and is based on a Linux 4.19 kernel shipping in Windows. This same kernel technology is used for Azure, and in both cases it helps to reduce Linux boot time and streamline memory use. WSL 2 also improves filesystem I/O performance and Linux compatibility, and can run Docker containers natively so that a VM is no longer needed for containers on Windows. The first WSL 2 preview will be available later this year.
  • The new Visual Studio Code Remote extension enables seamless remote development in the Windows Subsystem for Linux, containers, and virtual machines. This extension brings the best of local development and remote development together – allowing developers to enable scenarios on their local instance of Visual Studio Code. The Remote extension is available today.

Innovating people-centric experiences with you

As a member of and contributor to the developer community, I’m very excited by how Microsoft continues to embrace and expand our participation in the open source community. Working in the open helps us build better dev tools and frameworks because of the continuous feedback. And, you’ve told us that you would like us to continue to decouple many parts of the Universal Windows Platform – such as WinUI, MSIX, and Windows Terminal – so that you can adopt them incrementally. This allows you to use our platform and tools to meet your customers where they are going, empowering you to deliver rich, intelligent experiences that put people at the center. We hope you will continue to work with us and give us your feedback. I can’t wait to see what we can build together.

The post Developing people-centered experiences with Microsoft 365 appeared first on Windows Developer Blog.

Announcing the general availability of IntelliCode plus a sneak peek


We’re excited to announce the general availability of Visual Studio IntelliCode and offer a sneak peek at an up-and-coming feature we think you’ll love! With the release of Visual Studio 2019 version 16.1, IntelliCode will be included with any workload supporting C#, C++, TypeScript/JavaScript, and XAML. However, only the C# and XAML models are currently generally available; C++ and TypeScript/JavaScript remain in preview at this time. We’ve learned so much from all of you during our year in public preview and are thrilled for this next step. 

If you haven’t heard of IntelliCode,  it’s a set of AI-assisted capabilities that aims to improve developer productivity with features like contextual IntelliSense, code formatting and style rule inference. 

General Availability with Preview Perks 

IntelliCode is growing fast, so we’ve also packed in some preview features you can try out if you’d like – no extra installations required. Preview features, such as C++ and TypeScript/JavaScript support and argument completion, will be disabled by default, but you can easily enable any preview feature via Tools > Options > IntelliCode. Check out our updated docs for a full list of preview features. 

A Quick PEEK: finding repeated edits 

Have you ever found yourself making a repeated edit in your code, for instance when you’re refactoring to introduce a new helper function? You might consider creating a regular expression search to find all the places in your code where the change is required – but that seems like a lot of work, so you resign yourself to the tedious and error-prone task of going through the code manually. What if an algorithm could track your edits (locally, of course) and learn when you were doing something repetitive like that after only a couple of examples? Repeated edit detection does just that, and suggests other places where you need that same change:

This feature is under development right now, and we’re looking to make it available in a future release of IntelliCode.  

Want to hear about new preview features like this first? Sign up to receive regular updates! 

Let us know what you think! 

IntelliCode has benefitted greatly from all the customer feedback we’ve received in the past year and we hope you’ll help us continue to improve by letting us know how IntelliCode is working for you! Feel free to let us know what you’d like to see next by filing feature requests or reporting issues via Developer Community. 

The post Announcing the general availability of IntelliCode plus a sneak peek appeared first on The Visual Studio Blog.

Visual Studio 2019 version 16.1 Preview 3


The third Preview version of Visual Studio 2019 version 16.1 is now available. You can download it from VisualStudio.com. Or, if you’re already on the Preview channel, just click the notification bell from inside your Visual Studio 2019 Preview installation to update. This latest preview contains a range of additions, including IntelliCode support by default, various C++ productivity enhancements, and .NET tooling updates. We’ve highlighted some notable features below, and you can see a list of all the changes in the Preview release notes.
 
 

IntelliCode

Today, at Build 2019, we announced the general availability of IntelliCode, which gives you contextual IntelliSense recommendations powered by a machine learning model trained on thousands of open source repositories. IntelliCode now comes installed by default with any workloads that support C#, C++, TypeScript/JavaScript, or XAML.

AI-assisted IntelliSense recommendations in Visual Studio
AI-assisted IntelliSense recommendations in Visual Studio
 
C# and XAML base models are enabled by default while preview features such as C++, TypeScript/JavaScript, and C# custom model support must be enabled using Tools > Options > IntelliCode. Check out our restructured docs to learn more.
 
 

C++

In Preview 3, you are now able to use your local Windows Subsystem for Linux (WSL) installation with C++ natively in Visual Studio without additional configuration or any SSH connections. In addition, AddressSanitizer is integrated directly into Visual Studio for WSL and Linux projects.

This release also provides the ability to separate your remote build machine from your remote debug machine in both MSBuild and CMake projects. Learn more about the new Linux features in the Visual Studio 2019 version 16.1 Preview 3 Linux roll-up post.

AddressSanitizer integration into Visual Studio
AddressSanitizer integration into Visual Studio
 
Quick Info tooltips, which appear when hovering over a method name, now offer you a link that will search for online docs to learn more about the relevant code construct. For red-squiggled code, the link provided by Quick Info will search for the error online. You will now also see colorized code in these tooltips to reflect their colorization in the editor. Learn more about quick info tooltip improvements in Preview 3 in the quick info improvements post on the C++ Team Blog.

Colorized code and Search Online functionality in Quick Info tooltips
Colorized code and Search Online functionality in Quick Info tooltips
 
Two new C++ Code Analysis quick fixes are available: C6001: using uninitialized memory <variable> and C26494 VAR_USE_BEFORE_INIT. These quick fixes are available via the lightbulb menu on relevant lines and enabled by default in the Microsoft Native Minimum ruleset and C++ Core Check Type rulesets, respectively.

New Code Analysis quick fixes
New Code Analysis quick fixes
 
 

.NET Tooling

You can now try experimental IntelliSense completion for unimported types. IntelliSense suggestions for types in your project’s dependencies will be provided even if you have not yet added the import statement to your file. You can toggle this option on/off by navigating to Tools > Options > Text Editor > C# > IntelliSense.

IntelliSense completion for unimported types
IntelliSense completion for unimported types
 
You can now use a new EditorConfig code style rule to require or prevent using directives inside a namespace. This setting will also be exported when you use the “Generate .editorconfig” button located in Tools > Options > Text Editor > C# > Code Style.

New Editorconfig rule for requiring or preventing usings inside namespaces
New Editorconfig rule for requiring or preventing usings inside namespaces
 
Tools option to prefer usings inside or outside of namespaces
Tools option to prefer usings inside or outside of namespaces
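A sketch of the corresponding .editorconfig entry, assuming the underlying style option is the C# using-placement rule (values inside_namespace or outside_namespace, followed by a severity suffix):

```ini
# Require using directives to be placed inside the namespace in C# files
[*.cs]
csharp_using_directive_placement = inside_namespace:suggestion
```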
 
 

Use the latest features; give us feedback

To try out this preview of the latest features, update to Visual Studio 2019 version 16.1 Preview 3 online, via the notification bell inside Visual Studio, or by using the Visual Studio Installer.

We continue to value your feedback. As always, let us know of any issues you run into by using the Report a Problem tool in Visual Studio. You can also head over to Visual Studio Developer Community to track your issues, suggest a feature, ask questions, and find answers from others. We use your feedback to continue to improve Visual Studio 2019, so thank you again on behalf of our entire team.

The post Visual Studio 2019 version 16.1 Preview 3 appeared first on The Visual Studio Blog.

Introducing .NET 5


Today, we’re announcing that the next release after .NET Core 3.0 will be .NET 5. This will be the next big release in the .NET family.

There will be just one .NET going forward, and you will be able to use it to target Windows, Linux, macOS, iOS, Android, tvOS, watchOS, WebAssembly, and more.

We will introduce new .NET APIs, runtime capabilities and language features as part of .NET 5.

From the inception of the .NET Core project, we’ve added around fifty thousand .NET Framework APIs to the platform. .NET Core 3.0 closes much of the remaining capability gap with .NET Framework 4.8, enabling Windows Forms, WPF and Entity Framework 6. .NET 5 builds on this work, taking .NET Core and the best of Mono to create a single platform that you can use for all your modern .NET code.

We intend to release .NET 5 in November 2020, with the first preview available in the first half of 2020. It will be supported with future updates to Visual Studio 2019, Visual Studio for Mac and Visual Studio Code.

.NET 5 = .NET Core vNext

.NET 5 is the next step forward with .NET Core. The project aims to improve .NET in a few key ways:

  • Produce a single .NET runtime and framework that can be used everywhere and that has uniform runtime behaviors and developer experiences.
  • Expand the capabilities of .NET by taking the best of .NET Core, .NET Framework, Xamarin and Mono.
  • Build that product out of a single code-base that developers (Microsoft and the community) can work on and expand together and that improves all scenarios.

This new project and direction are a game-changer for .NET. With .NET 5, your code and project files will look and feel the same no matter which type of app you’re building. You’ll have access to the same runtime, API and language capabilities with each app. This includes new performance improvements that get committed to corefx, practically daily.

Everything you love about .NET Core will continue to exist:

  • Open source and community-oriented on GitHub.
  • Cross-platform implementation.
  • Support for leveraging platform-specific capabilities, such as Windows Forms and WPF on Windows and the native bindings to each native platform from Xamarin.
  • High performance.
  • Side-by-side installation.
  • Small project files (SDK-style).
  • Capable command-line interface (CLI).
  • Visual Studio, Visual Studio for Mac, and Visual Studio Code integration.

Here’s what will be new:

  • You will have more choice on runtime experiences (more on that below).
  • Java interoperability will be available on all platforms.
  • Objective-C and Swift interoperability will be supported on multiple operating systems.
  • CoreFX will be extended to support static compilation of .NET (ahead-of-time – AOT), smaller footprints and support for more operating systems.

We will ship .NET Core 3.0 this September, .NET 5 in November 2020, and then we intend to ship a major version of .NET once a year, every November:

We’re skipping version 4 because it would confuse users that are familiar with the .NET Framework, which has been using the 4.x series for a long time. Additionally, we wanted to clearly communicate that .NET 5 is the future for the .NET platform. Calling it .NET 5 makes it the highest version we’ve ever shipped.

We are also taking the opportunity to simplify naming. We thought that if there is only one .NET going forward, we don’t need a clarifying term like “Core”. The shorter name is a simplification and also communicates that .NET 5 has uniform capabilities and behaviors. Feel free to continue to use the “.NET Core” name if you prefer it.

Runtime experiences

Mono is the original cross-platform implementation of .NET. It started out as an open-source alternative to .NET Framework and transitioned to targeting mobile devices as iOS and Android devices became popular. Mono is the runtime used as part of Xamarin.

CoreCLR is the runtime used as part of .NET Core. It has been primarily targeted at supporting cloud applications, including the largest services at Microsoft, and now is also being used for Windows desktop, IoT and machine learning applications.

Taken together, the .NET Core and Mono runtimes have a lot of similarities (they are both .NET runtimes after all) but also valuable unique capabilities. It makes sense to make it possible to pick the runtime experience you want. We’re in the process of making CoreCLR and Mono drop-in replacements for one another. We will make it as simple as a build switch to choose between the different runtime options.

The following sections describe the primary pivots we are planning for .NET 5. They provide a clear view on how we plan to evolve the two runtimes individually, and also together.

High throughput and high productivity

From the very beginning, .NET has relied on a just-in-time compiler (JIT) to translate Intermediate Language (IL) code to optimized machine code. Since that time, we’ve built an industry-leading JIT-based managed runtime that is capable of very high throughput and also enabled developer experiences that make programming fast and easy.

JITs are well suited for long-running cloud and client scenarios. They are able to generate code that targets a specific machine configuration, including specific CPU instructions. A JIT can also re-generate methods at runtime, a technique used to JIT quickly while still having the option to produce a highly-tuned version of the code if this becomes a frequently used method.

Our efforts to make ASP.NET Core run faster on the TechEmpower benchmarks are a good example of the power of JIT and our investments in CoreCLR. Our efforts to harden .NET Core for containers also demonstrate the runtime’s ability to dynamically adapt to constrained environments.

Developer tools are another good example where JIT shines, such as with the dotnet watch tool or edit and continue. Tools often require compiling and loading code multiple times in a single process without restarting and need to do it very quickly.

Developers using .NET Core or .NET Framework have primarily relied on JIT. As a result, this experience should seem familiar.

The default experience for most .NET 5 workloads will be using the JIT-based CoreCLR runtime. The two notable exceptions are iOS and client-side Blazor (web assembly) since both require ahead-of-time (AOT) native compilation.

Fast startup, low footprint, and lower memory usage

The Mono Project has spent much of its effort focused on mobile and gaming consoles. A key capability and outcome of that project is an AOT compiler for .NET, based on the industry-leading LLVM compiler project. The Mono AOT compiler enables .NET code to be built into a single native code executable that can run on a machine, much like C++ code. AOT-compiled apps can run efficiently in small places, trading throughput for startup time when needed.

The Blazor project is already using the Mono AOT. It will be one of the first projects to transition to .NET 5. We are using it as one of the scenarios to prove out this plan.

There are two types of AOT solutions:

  • solutions that require 100% AOT compilation.
  • solutions where most code is AOT-compiled but where a JIT or interpreter is available and used for code patterns that are not friendly to AOT (like generics).

The Mono AOT supports both cases. The first type of AOT is required by Apple for iOS and some game consoles, typically for security reasons. The second is the preferred choice since it offers the benefits of AOT without any of its drawbacks.

.NET Native is the AOT compiler we use for Windows UWP applications and is an example of the first type of AOT listed above. With that particular implementation, we limited the .NET APIs and capabilities that you can use. We learned from that experience that AOT solutions need to cover the full spectrum of .NET APIs and patterns.

AOT compilation will remain required for iOS, web assembly and some game consoles. We will make AOT compilation an option for applications that are more appliance-like, that require fast startup and/or low footprint.

Fundamentals and overlapping experiences

It is critical that we continue to move forward as an overall platform with startup, throughput, memory use, reliability, and diagnostics. At the same time, it also makes sense to focus our efforts. We’ll invest more in throughput and reliability in CoreCLR while we invest more in startup and size reduction with the Mono AOT compiler. We think that these are good pairings. Throughput and reliability go together as do startup and size reduction.

While there are some characteristics where it makes sense to make different investments, there are others that do not.

Diagnostics capabilities need to be the same across .NET 5, for both functional and performance diagnostics. It is also important to support the same chips and operating systems (with the exception of iOS and web assembly).

We will continue to optimize .NET 5 for each workload and scenario, wherever it makes sense. There will be even greater emphasis on optimizations, particularly where multiple workloads have overlapping needs.

All .NET 5 applications will use the CoreFX framework. We will ensure that CoreFX works well in the places it is not used today, which is primarily the Xamarin and client-side Blazor workloads.
All .NET 5 applications will be buildable with the .NET CLI, ensuring that you have common command-line tooling across projects.

C# will move forward in lock-step with .NET 5. Developers writing .NET 5 apps will have access to the latest C# version and features.

The birth of the project

We met as a technical team in December 2018 in Boston to kick off this project. Design leaders from .NET teams (Mono/Xamarin and .NET Core) and also from Unity presented on various technical capabilities and architectural direction.

We are now moving forward on this project as a single team with one set of deliverables. Since December, we have made a lot of progress on a few projects:

  • Defined a minimal layer for the runtime <-> managed code boundary, with the goal of making >99% of CoreFX common code.
  • MonoVM can now use CoreFX and its class libraries.
  • Run all CoreFX tests on MonoVM using the CoreFX implementation.
  • Run ASP.NET Core 3.0 apps with MonoVM.
  • Run MonoDevelop and then Visual Studio for Mac on CoreCLR.

Moving to a single .NET implementation raises important questions. What will the target framework be? Will NuGet package compatibility rules be the same? Which workloads should be supported out-of-the-box by the .NET 5 SDK? How does writing code for a specific architecture work? Do we still need .NET Standard? We are working through these issues now and will soon be sharing design docs for you to read and give feedback on.

Closing

The .NET 5 project is an important and exciting new direction for .NET. You will see .NET become simpler but also have broader and more expansive capability and utility. All new development and feature capabilities will be part of .NET 5, including new C# versions.

We see a bright future ahead in which you can use the same .NET APIs and languages to target a broad range of application types, operating systems, and chip architectures. It will be easy to make changes to your build configuration to build your applications differently, in Visual Studio, Visual Studio for Mac, Visual Studio Code, Azure DevOps or at the command line.

See: .NET 5 on Hacker News

The post Introducing .NET 5 appeared first on .NET Blog.

ASP.NET Core updates in .NET Core 3.0 Preview 5


.NET Core 3.0 Preview 5 is now available. This iteration was brief for the team and primarily includes bug fixes and improvements to the more significant updates in Preview 4. This post summarizes the important points in this release.

Please see the release notes for additional details and known issues.

Get started

To get started with ASP.NET Core in .NET Core 3.0 Preview 5 install the .NET Core 3.0 Preview 5 SDK. If you’re on Windows using Visual Studio, you also need to install the latest preview of Visual Studio.

Upgrade an existing project

To upgrade an existing ASP.NET Core app (including Blazor apps) to .NET Core 3.0 Preview 5, follow the migration steps in the ASP.NET Core docs. Please also see the full list of breaking changes in ASP.NET Core 3.0.

To upgrade an existing ASP.NET Core 3.0 Preview 4 project to Preview 5:

  • Update Microsoft.AspNetCore.* package references to 3.0.0-preview5-19227-01
  • Update Microsoft.Extensions.* package references to 3.0.0-preview5.19227.01
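
In a project file, those two updates look roughly like this (the specific package IDs below are illustrative examples of the Microsoft.AspNetCore.* and Microsoft.Extensions.* patterns; only the version strings come from the list above):

```xml
<ItemGroup>
  <!-- Example Microsoft.AspNetCore.* package, updated to the Preview 5 version. -->
  <PackageReference Include="Microsoft.AspNetCore.Mvc.NewtonsoftJson"
                    Version="3.0.0-preview5-19227-01" />
  <!-- Example Microsoft.Extensions.* package; note the slightly different version format. -->
  <PackageReference Include="Microsoft.Extensions.Logging"
                    Version="3.0.0-preview5.19227.01" />
</ItemGroup>
```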

That’s it! You should be good to go with this latest preview release.

New JSON Serialization

In 3.0-preview5, ASP.NET Core MVC adds support for reading and writing JSON using System.Text.Json. The System.Text.Json serializer can read and write JSON asynchronously and is optimized for UTF-8 text, making it ideal for REST APIs and backend applications.

This is available for you to try out in Preview 5, but it is not yet the default in the templates. You can opt in to the new serializer by removing the call that adds the Newtonsoft.Json formatters:

public void ConfigureServices(IServiceCollection services)
{
    ...
    services.AddControllers()
        .AddNewtonsoftJson(); // Remove this call to use System.Text.Json
    ...
}

In the future this will be the default for all new ASP.NET Core applications. We hope that you will try it in these early previews and log any issues you find here.

We used this WeatherForecast model when we profiled JSON read/write performance using Newtonsoft.Json, our previous serializer.

public class WeatherForecast
{
    public DateTime Date { get; set; }

    public int TemperatureC { get; set; }

    public string Summary { get; set; }
}

JSON deserialization (input)

Description                       RPS      CPU (%)  Memory (MB)
Newtonsoft.Json – 500 bytes       136,435  95       172
System.Text.Json – 500 bytes      167,861  94       169
Newtonsoft.Json – 2.4 kbytes      97,137   97       174
System.Text.Json – 2.4 kbytes     132,026  96       169
Newtonsoft.Json – 40 kbytes       7,712    88       212
System.Text.Json – 40 kbytes      16,625   96       193

JSON serialization (output)

Description                       RPS      CPU (%)  Memory (MB)
Newtonsoft.Json – 500 bytes       120,273  94       174
System.Text.Json – 500 bytes      145,631  94       173
Newtonsoft.Json – 8 Kbytes        35,408   98       187
System.Text.Json – 8 Kbytes       56,424   97       184
Newtonsoft.Json – 40 Kbytes       8,416    99       202
System.Text.Json – 40 Kbytes      14,848   98       197

For the most common payload sizes, System.Text.Json offers about 20% throughput increase during input and output formatting with a smaller memory footprint.

Options for the serializer can be configured using MvcOptions:

services.AddControllers(options => options.SerializerOptions.WriteIndented = true) 

Integration with SignalR

System.Text.Json is now the default Hub Protocol used by SignalR clients and servers starting in ASP.NET Core 3.0-preview5. Please try it out and file issues if you find anything not working as expected.

Switching back to Newtonsoft.Json

If you would like to switch back to the previous default of using Newtonsoft.Json then you can do so on both the client and server.

  1. Install the Microsoft.AspNetCore.SignalR.Protocols.NewtonsoftJson NuGet package.
  2. On the client add .AddNewtonsoftJsonProtocol() to the HubConnectionBuilder:

    new HubConnectionBuilder()
    .WithUrl("/chatHub")
    .AddNewtonsoftJsonProtocol()
    .Build();
  3. On the server add .AddNewtonsoftJsonProtocol() to the AddSignalR() call:

    services.AddSignalR()
    .AddNewtonsoftJsonProtocol();

Give feedback

We hope you enjoy the new features in this preview release of ASP.NET Core! Please let us know what you think by filing issues on GitHub.

The post ASP.NET Core updates in .NET Core 3.0 Preview 5 appeared first on ASP.NET Blog.

Announcing .NET Core 3.0 Preview 5


Today, we are announcing .NET Core 3.0 Preview 5. It includes a new JSON serializer, support for publishing single-file executables, an update to runtime roll-forward, and changes in the BCL. If you missed it, check out the improvements we released in .NET Core 3.0 Preview 4 last month.

Download .NET Core 3.0 Preview 5 right now on Windows, macOS and Linux.

ASP.NET Core and EF Core are also releasing updates today.

WPF and Windows Forms Update

You should see a startup performance improvement for WPF and Windows Forms. WPF and Windows Forms assemblies are now ahead-of-time compiled, with crossgen. We have seen multiple reports from the community that startup performance is significantly improved between Preview 4 and Preview 5.

We published more code for WPF as part of .NET Core 3.0 Preview 4. We expect to complete publishing WPF by Preview 7.

Publishing Single EXEs

You can now publish a single-file executable with dotnet publish. This form of single EXE is effectively a self-extracting executable. It contains all dependencies, including native dependencies, as resources. At startup, it copies all dependencies to a temp directory and loads them from there. It only needs to unpack dependencies once. After that, startup is fast, without any penalty.

You can enable this publishing option by adding the PublishSingleFile property to your project file or by adding a new switch on the command line.

To produce a self-contained single EXE application, in this case for 64-bit Windows:

dotnet publish -r win10-x64 /p:PublishSingleFile=true

Single EXE applications must be architecture specific. As a result, a runtime identifier must be specified.
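
The project-file route mentioned above can be sketched like this (the runtime identifier here is just the 64-bit Windows example from the command line above):

```xml
<PropertyGroup>
  <!-- Equivalent to passing /p:PublishSingleFile=true on the command line. -->
  <PublishSingleFile>true</PublishSingleFile>
  <!-- Single-file builds are architecture-specific, so pin a runtime identifier. -->
  <RuntimeIdentifier>win10-x64</RuntimeIdentifier>
</PropertyGroup>
```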

See Single file bundler for more information.

Assembly trimmer, ahead-of-time compilation (via crossgen) and single file bundling are all new features in .NET Core 3.0 that can be used together or separately. Expect to hear more about these three features in future previews.

We expect that some of you will prefer a single exe produced by an ahead-of-time compiler, as opposed to the self-extracting-executable approach that we are providing in .NET Core 3.0. The ahead-of-time compiler approach will be provided as part of the .NET 5 release.

Introducing the JSON Serializer (and an update to the writer)

JSON Serializer

The new JSON serializer layers on top of the high-performance Utf8JsonReader and Utf8JsonWriter. It deserializes objects from JSON and serializes objects to JSON. Memory allocations are kept to a minimum, and it includes support for reading and writing JSON asynchronously with a Stream.

To get started, use the JsonSerializer class in the System.Text.Json.Serialization namespace. See the documentation for information and samples. The feature set is currently being extended for future previews.
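
As a rough sketch of round-tripping the WeatherForecast-style model used elsewhere in this post (note: the namespace and method names were still settling during the 3.0 previews; the shape below matches the final shipped System.Text.Json API, so treat it as illustrative rather than preview-exact):

```csharp
using System;
using System.Text.Json;

public class WeatherForecast
{
    public DateTime Date { get; set; }
    public int TemperatureC { get; set; }
    public string Summary { get; set; }
}

public static class Program
{
    public static void Main()
    {
        var forecast = new WeatherForecast
        {
            Date = DateTime.Parse("2019-05-06"),
            TemperatureC = 25,
            Summary = "Hot"
        };

        // Serialize the object graph to a UTF-16 JSON string...
        string json = JsonSerializer.Serialize(forecast);

        // ...and deserialize it back into a strongly-typed object.
        var roundTripped = JsonSerializer.Deserialize<WeatherForecast>(json);
        Console.WriteLine(roundTripped.Summary);
    }
}
```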

Utf8JsonWriter Design Change

Based on feedback around usability and reliability, we made a design change to the Utf8JsonWriter that was added in preview2. The writer is now a regular class, rather than a ref struct, and implements IDisposable. This allows us to add support for writing to streams directly. Furthermore, we removed JsonWriterState and now the JsonWriterOptions need to be passed-in directly to the Utf8JsonWriter, which maintains its own state. To help offset the allocation, the Utf8JsonWriter has a new Reset API that lets you reset its state and re-use the writer. We also added a built-in IBufferWriter<T> implementation called ArrayBufferWriter<T> that can be used with the Utf8JsonWriter. Here’s a code snippet that highlights the writer changes:

// New, built-in IBufferWriter<byte> that's backed by a grow-able array
var arrayBufferWriter = new ArrayBufferWriter<byte>();

// Utf8JsonWriter is now IDisposable
using (var writer = new Utf8JsonWriter(arrayBufferWriter, new JsonWriterOptions { Indented = true }))
{

   // Write some JSON using existing WriteX() APIs.

   writer.Flush(); // There is no isFinalBlock bool parameter anymore
}

You can read more about the design change here.

Index and Range

In the previous preview, the framework supported Index and Range by providing overloads of common operations, such as indexers and methods like Substring, that accepted Index and Range values. Based on feedback from early adopters, we decided to simplify this by letting the compiler call the existing indexers instead. The Index and Range Changes document has more details on how this works, but the basic idea is that the compiler is able to call an int-based indexer by extracting the offset from the given Index value. This means that indexing using Index will now work on all types that provide an indexer and have a Count or Length property.

For Range, the compiler usually cannot use an existing indexer because those only return singular values. However, the compiler will now allow indexing using Range when the type either provides an indexer that accepts Range or has a method called Slice. This enables you to make indexing with Range work on interfaces and types you don’t control by providing an extension method.

Existing code that uses these indexers will continue to compile and work as expected, as demonstrated by the following code.

string s = "0123456789";
char lastChar = s[^1]; // lastChar = '9'
string startFromIndex2 = s[2..]; // startFromIndex2 = "23456789"
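
The compiler pattern described above also extends to your own types. A minimal sketch (the Digits type is hypothetical): a Length property plus an int indexer make Index work, and a Slice(int, int) method makes Range work:

```csharp
using System;

// Hypothetical type, for illustration only.
public class Digits
{
    private readonly int[] _values = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };

    // Length (or Count) plus an int indexer is all the compiler
    // needs to support indexing with an Index value like ^1.
    public int Length => _values.Length;

    public int this[int index] => _values[index];

    // The compiler calls Slice(start, length) when the type
    // is indexed with a Range value like 7..
    public int[] Slice(int start, int length) =>
        _values[start..(start + length)];
}

public static class Program
{
    public static void Main()
    {
        var digits = new Digits();
        Console.WriteLine(digits[^1]);          // compiler calls this[Length - 1]
        int[] tail = digits[7..];               // compiler calls Slice(7, 3)
        Console.WriteLine(tail.Length);
    }
}
```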

The following String methods have been removed:

public String Substring(Index startIndex);
public String Substring(Range range);

Any code that uses these String methods will need to be updated to use the indexers instead:

string substring = s[^10..]; // Replaces s.Substring(^10);
string substring = s[2..8];   // Replaces s.Substring(2..8);

The following Range method previously returned OffsetAndLength:

public Range.OffsetAndLength GetOffsetAndLength(int length);

It will now simply return a tuple instead:

public ValueTuple<int, int> GetOffsetAndLength(int length);

The following code sample will continue to compile and run as before:

(int offset, int length) = range.GetOffsetAndLength(20);

New Japanese Era (Reiwa)

On May 1st, 2019, Japan started a new era called Reiwa. Software that has support for Japanese calendars, like .NET Core, must be updated to accommodate Reiwa. .NET Core and .NET Framework have been updated and correctly handle Japanese date formatting and parsing with the new era.

.NET relies on operating system or other updates to correctly process Reiwa dates. If you or your customers are using Windows, download the latest updates for your Windows version. If running macOS or Linux, download and install ICU version 64.2, which has support for the new Japanese era.

The Handling a new era in the Japanese calendar in .NET blog post has more information about the changes made in .NET to support the new Japanese era.

Hardware Intrinsic API changes

The Avx2.ConvertToVector256* methods were changed to return a signed, rather than unsigned, type. This brings them in line with the Sse41.ConvertToVector128* methods and the corresponding native intrinsics. As an example, Vector256<ushort> ConvertToVector256UInt16(Vector128<byte>) is now Vector256<short> ConvertToVector256Int16(Vector128<byte>).

The Sse41/Avx.ConvertToVector128/256* methods were split into those that take a Vector128/256<T> and those that take a T*. As an example, ConvertToVector256Int16(Vector128<byte>) now also has a ConvertToVector256Int16(byte*) overload. This was done because the underlying instruction which takes an address does a partial vector read (rather than a full vector read or a scalar read). This meant we were not able to always emit the optimal instruction coding when the user had to do a read from memory. This split allows the user to explicitly select the addressing form of the instruction when needed (such as when you don’t already have a Vector128<T>).

The FloatComparisonMode enum entries and the Sse/Sse2.Compare methods were renamed to clarify that the operation is ordered/unordered and not the inputs. They were also reordered to be more consistent across the SSE and AVX implementations. An example is that Sse.CompareEqualOrderedScalar is now Sse.CompareScalarOrderedEqual. Likewise, for the AVX versions, Avx.CompareScalar(left, right, FloatComparisonMode.OrderedEqualNonSignalling) is now Avx.CompareScalar(left, right, FloatComparisonMode.EqualOrderedNonSignalling).

.NET Core runtime roll-forward policy update

The .NET Core runtime (specifically, the runtime binder) now enables major-version roll-forward as an opt-in policy. The runtime binder already enables roll-forward on patch and minor versions as a default policy. We never intend to enable major-version roll-forward as a default policy; however, it is important for some scenarios.

We also believe that it is important to expose a comprehensive set of runtime binding configuration options to give you the control you need.

There is a new knob called RollForward, which accepts the following values:

  • LatestPatch — Roll forward to the highest patch version. This disables minor version roll forward.
  • Minor — Roll forward to the lowest higher minor version, if requested minor version is missing. If the requested minor version is present, then the LatestPatch policy is used. This is the default policy.
  • Major — Roll forward to lowest higher major version, and lowest minor version, if requested major version is missing. If the requested major version is present, then the Minor policy is used.
  • LatestMinor — Roll forward to highest minor version, even if requested minor version is present.
  • LatestMajor — Roll forward to highest major and highest minor version, even if requested major is present.
  • Disable — Do not roll forward. Only bind to the specified version. This policy is not recommended for general use since it disables the ability to roll forward to the latest patches. It is only recommended for testing.

See Runtime Binding Behavior and dotnet/core-setup #5691 for more information.
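
As an illustrative sketch, the policy can be expressed in an application's runtimeconfig.json; treat the exact property names and casing below as assumptions to verify against the linked docs:

```json
{
  "runtimeOptions": {
    "tfm": "netcoreapp3.0",
    "rollForward": "Major",
    "framework": {
      "name": "Microsoft.NETCore.App",
      "version": "3.0.0"
    }
  }
}
```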

Making .NET Core runtime Docker images for Linux smaller

We reduced the size of the runtime by about 10 MB by using a feature we call “partial crossgen”.

By default, when we ahead-of-time compile an assembly, we compile all methods. These natively compiled methods increase the size of an assembly, sometimes by a lot (the cost is quite variable). In many cases, a subset, sometimes a small subset, of methods is used at startup. That means that cost and benefit can be asymmetric. Partial crossgen enables us to pre-compile only the methods that matter.

To enable this outcome, we run several .NET Core applications and collect data about which methods are called. We call this process “training”. The training data is called “IBC”, and is used as an input to crossgen to determine which methods to compile.

This process is only useful if we train the product with representative applications. Otherwise, it can hurt startup. At present, we are targeting making Docker container images for Linux smaller. As a result, it’s only the .NET Core runtime build for Linux that is smaller and where we used partial crossgen. That enables us to train .NET Core with a smaller set of applications, because the scenario is relatively narrow. Our training has been focused on the .NET Core SDK (for example, running dotnet build and dotnet test), ASP.NET Core applications and PowerShell.

We will likely expand the use of partial crossgen in future releases.

Docker Updates

We now support Alpine ARM64 runtime images. We also switched the default Linux image to Debian 10 / Buster. Debian 10 has not been released yet. We are betting that it will be released before .NET Core 3.0.

We added support for Ubuntu 19.04 / Disco. We don’t usually add support for Ubuntu non-LTS releases. We added support for 19.04 as part of our process of being ready for Ubuntu 20.04, the next LTS release. We intend to add support for 19.10 when it is released.

We posted an update last week about using .NET Core and Docker together. These improvements are covered in more detail in that post.

AssemblyLoadContext Updates

We are continuing to improve AssemblyLoadContext. We aim to make simple plug-in models work without much effort (or code) on your part, and to make complex plug-in models possible. In Preview 5, we enabled implicit type and assembly loading via Type.GetType when the caller is not the application, such as a serializer, for example.

See the AssemblyLoadContext.CurrentContextualReflectionContext design document for more information.

COM-callable managed components

You can now create COM-callable managed components, on Windows. This capability is critical to use .NET Core with COM add-in models, and also to provide parity with .NET Framework.

With .NET Framework, we used mscoree.dll as the COM server. With .NET Core, we provide a native launcher dll that gets added to the component bin directory when you build your COM component.

See COM Server Demo to try out this new capability.

GC Large page support

Large Pages (also known as Huge Pages on Linux) is a feature where the operating system is able to establish memory regions larger than the native page size (often 4K) to improve performance of the application requesting these large pages.

When a virtual-to-physical address translation occurs, a cache called the Translation lookaside buffer (TLB) is first consulted (often in parallel) to check if a physical translation for the virtual address being accessed is available to avoid doing a page-table walk which can be expensive. Each large-page translation uses a single translation buffer inside the CPU. The size of this buffer is typically three orders of magnitude larger than the native page size; this increases the efficiency of the translation buffer, which can increase performance for frequently accessed memory.

The GC can now be configured with GCLargePages as an opt-in feature to allocate large pages on Windows. Using large pages reduces TLB misses and can therefore potentially increase application performance. It does, however, come with some limitations.
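
One common way to set runtime knobs like this in .NET Core is an environment variable; the exact variable name below follows the COMPlus_ convention and should be treated as an assumption to verify against the GC configuration docs:

```shell
:: Opt in to GC large pages before launching the app (Windows cmd).
:: 1 enables the feature; it is off by default.
set COMPlus_GCLargePages=1
```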

Closing

Thanks for trying out .NET Core 3.0. Please continue to give us feedback, either in the comments or on GitHub. We are actively looking for reports and will continue to make changes based on your feedback.

Take a look at the .NET Core 3.0 Preview 1, Preview 2, Preview 3 and Preview 4 posts if you missed them. Together with this post, they describe the complete set of new capabilities that have been added so far in the .NET Core 3.0 release.

The post Announcing .NET Core 3.0 Preview 5 appeared first on .NET Blog.

C++ with Visual Studio 2019 and Windows Subsystem for Linux (WSL)


In Visual Studio 2019 version 16.1 Preview 3 we have added native support for using C++ with the Windows Subsystem for Linux (WSL). WSL lets you run a lightweight Linux environment directly on Windows, including most command-line tools, utilities, and applications. In Visual Studio you no longer need to add a remote connection or configure SSH in order to build and debug on your local WSL installation. This will save you time getting up and running in a Linux environment and eliminates the need to copy and maintain sources on a remote machine.

In this blog post, we’ll first look at how to set up WSL. We will then walk-through how to use it with a CMake project and MSBuild-based Linux project. If you are just getting started with our native support for CMake, be sure to check out our CMake Support in Visual Studio introductory page too.

Setting up WSL

You can find details on how to install WSL here, but the easiest way is to download your distro of choice (Ubuntu, Debian, etc.) from the Windows Store.

To configure your WSL installation to work with Visual Studio you need the following tools installed: gcc, gdb, make, rsync, and zip. You can install them on distros that use apt with this command:

sudo apt install g++ gdb make rsync zip

The inclusion of rsync and zip allows Visual Studio to extract header files from your WSL instance to the Windows filesystem to use for IntelliSense. Due to the lack of visibility into the root file system of WSL from Windows, a local rsync copy is done inside WSL to copy the headers to a Windows visible location. This is a one-time operation that Visual Studio performs to configure IntelliSense for Linux connections.

Visual Studio CMake projects and WSL

Let’s start by looking at a simple CMake project.

1. Start Visual Studio 2019 (version 16.1 or later) and create a new CMake project using the “CMake Project” template or open an existing one.

2. Navigate to the configuration drop-down menu and select “Manage Configurations…” This will open the CMake Settings Editor.

"Manage Configurations..."
3. Visual Studio creates an x64-Debug or x86-Debug configuration by default. You can add a new WSL configuration by clicking on the green plus sign above the configuration manager on the left-hand side of the editor.

Add a new connection
4. Select a WSL-Debug configuration.

Select a WSL-Debug configuration
5. By default, Visual Studio will pick up on your default WSL configuration. If you have side-by-side installations of WSL, then you can specify which WSL executable Visual Studio should use by setting the “Path to WSL executable” property under the “General” section of the CMake Settings editor.

6. Save the editor (Ctrl + S) and select your WSL-Debug configuration as your active configuration using the configuration drop-down menu at the top of the page. This will start the cache generation of your WSL configuration.

Select WSL configuration
7. If you don’t have CMake on your WSL installation, then you will be prompted to automatically deploy a recent version of CMake from Visual Studio. If you are missing any other dependencies (gcc, gdb, make, rsync, zip) then see above for Setting up WSL.

8. In the Solution Explorer, expand the project subfolder and set a breakpoint in main() in the .cpp file.

9. In the launch bar change the launch target from “Current Document” to your project name.

10. Now click “Start” (Debug > Start) or press F5. Your project will build, the executable will launch, and you will hit your breakpoint. You can see the output of your program (in this case, “Hello CMake”) in the Linux Console Window.

Debug program to see "Hello CMake"

 

Visual Studio MSBuild-based projects and WSL

We also support using WSL from our MSBuild-based Linux projects in Visual Studio. Here are the steps for getting started.

1. Create a new Linux Console Application (you can filter platform by Linux and look for “Console App”) in Visual Studio 2019 version 16.1 or later or open an existing one.

2. Right-click on the project in the Solution Explorer and select “Properties” to open the Project Property Pages.

3. In the dialog that opens you will see the “General” property page. On this page, there is an option for “Platform Toolset.” Change this from “Remote_GCC_1_0” to “WSL_1_0.”

4. By default, Visual Studio will target the WSL installation that is set as the default through wslconfig. If you have side-by-side installations of WSL, then you can specify which WSL executable Visual Studio should use by setting the “WSL *.exe full path” option directly below “Platform Toolset.” Press OK.

5. Set a breakpoint in main.cpp and click “Start Debugging” (Debug > Start Debugging). If you are missing any dependencies on your WSL installation (gcc, gdb, make, rsync, zip) then see above for Setting up WSL. You can see the output of your program in the Linux Console Window.
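Step 4 above refers to the default WSL installation; you can inspect and change it with wslconfig from a Windows command prompt. A quick sketch (the distro name is an example):

```shell
wslconfig /list
wslconfig /setdefault Ubuntu-18.04
```

The first command lists your installed distributions and marks the default; the second makes Ubuntu-18.04 the default distribution.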

Give us your feedback!

If you have feedback on WSL or anything regarding our Linux support in Visual Studio, we would love to hear from you. We can be reached via the comments below or via email (visualcpp@microsoft.com). If you encounter other problems with Visual Studio or MSVC or have a suggestion, you can use the Report a Problem tool in Visual Studio or head over to Visual Studio Developer Community. You can also find us on Twitter (@VisualC) and (@erikasweet_).


Coming soon…

More announcements about AddressSanitizer integration for Linux projects, Code Analysis quick fixes, and Quick Info improvements are coming to the C++ Team Blog later this week. Stay tuned!

The post C++ with Visual Studio 2019 and Windows Subsystem for Linux (WSL) appeared first on C++ Team Blog.


Announcing Kubernetes integration for Azure Pipelines


Kubernetes and Docker containers have become an important part of many organizations’ stack, as they move to transform their business digitally. Kubernetes increases the agility of your infrastructure, so you can run your apps reliably at scale. At the same time, customers who are using it have started focusing more on adopting DevOps practices to make the development process more agile too, and are implementing Continuous Integration and Continuous Delivery pipelines built around containers.

The new Azure Pipelines features we are announcing today are designed to help our customers build applications with Docker containers and deploy them to Kubernetes clusters, including Azure Kubernetes Service. These features are available right now to all Azure Pipelines customers, in preview.

Getting started with CI/CD pipelines and Kubernetes

We understand that one of the biggest blockers to adopting DevOps practices with containers and Kubernetes is setting up the required “plumbing”. We believe developers should be able to go from a Git repo to an app running inside Kubernetes in as few steps as possible. With Azure Pipelines, we aim to make this experience straightforward by automating the creation of the CI/CD definitions, as well as the Kubernetes manifest.

When you create a new pipeline, Azure DevOps automatically scans the Git repository and suggests recommended templates for container-based applications. Using the templates, you have the option to automatically configure CI/CD and deployments to Kubernetes.

  1. Start by selecting the repository that contains your application code and Dockerfile. Selecting a repository
  2. Azure Pipelines analyzes the repository and suggests the right set of YAML templates, so you don’t need to configure it manually.
    For example, here we’ve identified that the repository has a Node.js application and a Dockerfile. Azure Pipelines then suggests multiple templates, including “Docker image” (for CI only: build the Docker image and push it to a registry), and “Deploy to Azure Kubernetes Service” (which includes both CI and CD).
    Selecting a template to use
  3. Once you select the AKS template, you will be asked for the names of the AKS cluster, container registry and namespace; these are the only inputs you need to provide. Azure Pipelines auto-fills the image name and service port.
    Kubernetes cluster configuration
  4. The platform then auto-generates the YAML file for Azure Pipelines, and the Kubernetes manifest for deploying to the cluster. Both will be committed to your Git repository, so you get full configuration-as-code.
    Review YAML

That’s it! You’ve configured a pipeline for an AKS cluster in four steps.

Azure Pipelines then offers a set of rich views to monitor pipeline progress and the pipeline execution summary.

Pipeline execution summary

Azure Pipelines’ getting started experience takes care of creating and configuring the pipeline without the user needing to know any of the Kubernetes concepts. Developers only need a code repo with a Dockerfile. Once the pipeline is set up you can modify its definition by using the new YAML editor, with support for IntelliSense smart code completion. You have full control, so you can add more steps like testing, or even bring in your Helm charts for deploying apps.
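For a sense of what gets committed to the repository, a CI-only pipeline produced from the “Docker image” template looks roughly like the sketch below. The service connection and repository names are hypothetical, and the exact task version and inputs the wizard emits may differ:

```yaml
trigger:
- master

pool:
  vmImage: 'ubuntu-16.04'

steps:
# Build the image from the repository's Dockerfile and push it to the registry
- task: Docker@2
  inputs:
    containerRegistry: 'my-acr-connection'   # hypothetical ACR service connection
    repository: 'sampleapp'
    command: 'buildAndPush'
    Dockerfile: '**/Dockerfile'
    tags: '$(Build.BuildId)'
```

Tagging with $(Build.BuildId) gives each run a unique image tag, which is what the deployment steps later substitute into the Kubernetes manifest.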

As we are launching this new experience in preview, we are currently optimizing it for Azure Kubernetes Service (AKS) and Azure Container Registry (ACR). Other Kubernetes clusters (for example, running on-premises or in other clouds), as well as other container registries, can be used, but require setting up a Service Account and connection manually. We are working on an improved UI to simplify adding other Kubernetes clusters and Docker registries in the next few months.

Deploying to Kubernetes

Azure Pipelines customers have been able to deploy apps to Kubernetes clusters by using the built-in tasks for kubectl and Helm. It’s also possible to run custom scripts to achieve the same results. While both those methods can be effective, they come with some quirks you must work around to make deployments reliable. For example, when you deploy a container image to Kubernetes, the image tag changes with each pipeline run, so you need some tokenization to update your Helm chart or Kubernetes manifest files. Simply running the command could also result in scenarios where the pipeline run was reported successful (because the command returned successfully), but the app deployment failed for other reasons, for example an imagePullSecret value not being set. Solving these issues would require writing more scripts to check the state of deployments.

To simplify this, we are introducing a new “Deploy Kubernetes manifests” task, available in preview. This task goes beyond just running commands, solving some of the problems that customers face when deploying to Kubernetes. It includes features such as deployment strategies, artifact substitution, metadata annotation, manifest stability check, and secret handling.

When you use this task to deploy to a Kubernetes cluster, the task annotates the Kubernetes objects with CI/CD metadata like the pipeline run ID. This helps with traceability: in case you want to know how and when a specific Kubernetes object was created, you can just look that up with the annotation details of the Kubernetes objects (like pod, deployment, etc).

We have improved the Kubernetes service connection to cover all the different ways in which you can connect and deploy to a cluster. We understand that Kubernetes clusters are often used by multiple teams, deploying different microservices, and a key requirement is to give each team permission to a specific namespace.
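To sketch what that namespace-scoped permission looks like on the cluster side (all names here are hypothetical), a team’s Service Account can be bound to the built-in edit ClusterRole within its own namespace:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: azure-pipelines-deploy      # hypothetical account used by the pipeline
  namespace: team-a                 # the namespace this team deploys to
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: azure-pipelines-deploy-binding
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit                        # built-in role; grants write access within the namespace
subjects:
- kind: ServiceAccount
  name: azure-pipelines-deploy
  namespace: team-a
```

A Kubernetes service connection in Azure DevOps can then be created from this Service Account’s credentials, giving the team’s pipeline rights only in the team-a namespace.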

The new Kubernetes manifest task can be defined as YAML too, for example:

steps:
- task: "KubernetesManifest@0"
  displayName: "Deploy"
  inputs:
    kubernetesServiceConnection: "someK8sSC1"
    namespace: "default"
    manifests: "manifests/*"
    containers: 'foobar/demo:$(tagVariable)'
    imagePullSecrets: |
      some-secret
      some-other-secret

Now you can connect to the Kubernetes cluster by using Service Account details or by passing on the kubeconfig file. Alternatively, for users of Azure Kubernetes Service, you can use the Azure subscription details to have Azure DevOps automatically create a Service Account scoped to a specific cluster and namespace.

Hybrid and multi-cloud support

You can use our Kubernetes features irrespective of where your cluster is deployed to, including on-premises and on other cloud providers, enabling simple application portability. It also supports other Kubernetes-based distributions such as OpenShift. You can use Service Accounts to target any Kubernetes cluster, as described in the documentation.

Get started & feedback

You can get started with Azure Pipelines by creating a free account. Azure Pipelines, part of Azure DevOps, is free for individuals and small teams of up to five.

Additionally, if you’re looking for a way to get started quickly with Kubernetes, Azure Kubernetes Service provides a fully-managed Kubernetes cluster running on the cloud; try it out with a free Azure trial account. After that, check out the full documentation available on integrating Azure Pipelines with Kubernetes.

As always, if you have feedback for our Azure Pipelines team, feel free to comment below, or send us a tweet to @AzureDevOps.

The post Announcing Kubernetes integration for Azure Pipelines appeared first on Azure DevOps Blog.

Bing Maps SDK Public Preview for Android and iOS Launches Today


We are excited to announce that the Bing Maps SDK for Android and iOS is now available for public preview.

Experience the power of Bing Maps on mobile! The Bing Maps SDK for Android and iOS features new controls powered by a full vector 3D map engine, with a number of standard mapping capabilities running with native performance. Aside from the standard features you would expect from a map control, we are bringing in the same native 3D support you know from our Windows UWP control. Add dimension to both road and aerial views with worldwide 3D elevation data (via our Digital Elevation Model). In addition, add context to road maps with our symbolic 3D buildings. Of course, if 3D is not needed for your scenario, we support a standard Web Mercator projection.

Bing Maps SDK - Native Control Screenshots

You can find a full list of our preview features in our documentation.

With the new Bing Maps controls for Android and iOS, mobile now complements our multi-platform ecosystem. Create maps with a similar look and feel across Web, Windows, and mobile. When your product requires a unique look and feel, it’s simple to go over to the Map Style Sheet Editor and create a personalized map style. This personalization is easily integrated into all our supported platforms and helps you deliver a consistent user experience.

Maps Style Editor

We can’t wait to see how our new SDK will be used! To get started head over to our documentation. Finally, we want to hear from you. Please help us shape our feature roadmap and let us know what is most important to you. Join the conversation on GitHub or send an email to bmnsdk@microsoft.com.

Happy mapping!

- Bing Maps team

Bing Maps releases three new services: Mixed Reality Map Control, iOS & Android Controls preview & MIO API


The past several weeks have brought a lot of exciting developments when it comes to Bing Maps. As the team prepared for Microsoft Build 2019, we were also busy working on several next-level releases that really show what maps can do! We’re making it easier than ever to build immersive, flexible and optimized experiences using the Bing Maps APIs.

Maps SDK – A Mixed Reality Map Control

This latest release brings all the cool things about 3D maps to Unity developers. As a map control for Unity, the Maps SDK makes it possible to fold Bing Maps’ 3D world data into Unity-based mixed reality experiences. The control supports drag-and-drop setup and provides an off-the-shelf 3D map, customizable controls, and the building blocks for creative mixed reality map experiences.


Get the full story at the Garage blog.

Native controls preview for iOS and Android

Flexibility is the name of the game for our new Bing Maps SDK public preview for Android and iOS. This new library makes it easier than ever to build mapping applications for iOS and Android, providing a consistent experience across platforms. The new SDK for iOS and Android features new controls powered by a full vector 3D map engine with a number of standard mapping capabilities running with native performance. With this SDK, you can maintain consistent user experiences including custom styling across these platforms.

Get more details from the announcement blog post.

Next level routing – Multi-Itinerary optimization

Routing can become quite the complex operation when you have multiple stops, timeframes, and agents to consider. Factor in the ebb and flow of traffic and you have quite the problem to solve. Enter the Bing Maps Multi-Itinerary optimization (MIO) API. This new API automates route optimization processes by creating itineraries that support multi-day route planning, multiple drivers and shifts, service time windows, priorities and dwell time at each destination, all while considering the predicted traffic on the route and customers’ preferences for reducing cost related to travel time and distance.

Read the announcement blog post to learn even more about this powerful API.

We are excited about the new developments and look forward to seeing all the cool things you build with them. To learn more about the Bing Maps APIs, go to our website. To connect with the team and provide feedback or suggestions about the next great thing we should work on, reach out to us on the Bing Maps Forum.

- Bing Maps team

Announcing ML.NET 1.0



We are excited to announce the release of ML.NET 1.0 today.  ML.NET is a free, cross-platform and open source machine learning framework designed to bring the power of machine learning (ML) into .NET applications.

ML.NET Logo

GitHub: https://github.com/dotnet/machinelearning

Get Started @ http://dot.net/ml

ML.NET allows you to train, build and ship custom machine learning models using C# or F# for scenarios such as sentiment analysis, issue classification, forecasting, recommendations and more.  You can check out these common scenarios and tasks at our ML.NET samples repo.

ML.NET was originally developed within Microsoft Research and evolved into a significant framework used by many Microsoft products, such as Windows Defender, Microsoft Office (PowerPoint design ideas, Excel chart recommendations), Azure Machine Learning, and Power BI key influencers, to name a few!

Since its launch, ML.NET has been used by many organizations like SigParser (spam email detection), William Mullens (legal issue classification) and Evolution Software (moisture level detection for hazelnuts). You can follow the journey of these and many other organizations using ML.NET at our ML.NET customer showcase. These users tell us that the ease of use of ML.NET, the ability to reuse their .NET skills, and keeping their tech stack entirely in .NET are the primary drivers for their use of ML.NET.

Along with the ML.NET 1.0 release, we are also adding new preview features like automated machine learning (AutoML) and new tools like the ML.NET CLI and ML.NET Model Builder, which means adding machine learning models to your applications is now only a right-click away!

The remainder of this post focuses on these new experiences.

ML.NET Core Components

ML.NET is aimed at providing the end-to-end workflow for consuming ML in .NET apps across the various steps of machine learning (pre-processing, feature engineering, modeling, evaluation and operationalization). ML.NET 1.0 provides the following key components:

  • Data Representation
    • Fundamental data pipeline types, such as IDataView
    • Readers to support reading data from delimited text files or IEnumerable of objects
  • Support for machine learning tasks:
    • Binary Classification
    • Multi-class classification
    • Regression
    • Ranking
    • Anomaly Detection
    • Clustering
    • Recommendation (preview)
  • Data Transformation and featurization
    • Text
    • Categories
    • Feature Selection
    • Normalization and missing value handling
    • Image featurization
    • Time Series (preview)
    • Support for ONNX and TensorFlow model integration (preview)
  • Other
    • Model understanding and explainability
    • User-defined custom transformations
    • Schema operations
    • Support for dataset manipulation and cross-validation

 

Automated Machine Learning Preview

Getting started with machine learning today involves a steep learning curve. When building custom machine learning models, you have to figure out which machine learning task to pick for your scenario (classification or regression?), transform your data into a format that ML algorithms can understand (e.g. textual data into numeric vectors), and fine-tune these ML algorithms to provide the best performance. If you are new to ML, each of these steps can be quite daunting!

Automated Machine Learning makes your journey with ML simpler by automatically figuring out how to transform your input data and selecting the best performing ML algorithm with the right settings allowing you to build best-in-class custom ML models easily.

AutoML support in ML.NET is in preview, and we currently support the regression ML task (used for scenarios like price prediction) and the classification ML task (used for scenarios like sentiment analysis, document classification, spam detection, etc.).

You can try out the AutoML experience in ML.NET in three form factors using ML.NET Model Builder, ML.NET CLI or by using the AutoML API directly (samples can be found here).

For users who are new to machine learning, we recommend starting with the ML.NET Model Builder in Visual Studio and the ML.NET CLI on any platform. The AutoML API is also very handy for scenarios where you want to build models on the fly.

Model Builder Preview

In order to simplify the journey of .NET developers building ML models, we are also excited to announce ML.NET Model Builder today. With ML.NET Model Builder, adding machine learning to your apps is only a right-click away!

Model Builder is a simple UI tool for developers which uses AutoML to build best-in-class ML models from the dataset you provide. In addition, Model Builder generates model training and model consumption code for the best performing model, allowing you to quickly add ML to your existing application.

ML.NET Model Builder

Learn more about the ML.NET Model Builder

Model Builder is currently in preview and we would love for you to try it out and tell us what you think!

ML.NET CLI Preview

The ML.NET CLI (command-line interface) is another new tool we are introducing today!

The ML.NET CLI is a dotnet tool for generating ML.NET models using AutoML and ML.NET. It quickly iterates through your dataset for a specific ML task (currently regression and classification) and produces the best model.

In addition to producing the best model, the CLI also lets you generate model training and model consumption code for that model.

ML.NET CLI is cross-platform and is an easy add-on to the .NET CLI. The Model Builder Visual Studio extension also uses ML.NET CLI to provide model builder capabilities.

You can install the ML.NET CLI with this command:

dotnet tool install -g mlnet
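As an illustration of the workflow, training a model for a classification scenario with the preview CLI looks roughly like this (the dataset file and label column name are hypothetical, and flags may change between previews):

```shell
mlnet auto-train --task binary-classification --dataset "customer-reviews.tsv" --label-column-name Sentiment
```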

The following picture shows the ML.NET CLI training a model on a sentiment analysis dataset.

ML.NET CLI

Learn more about the ML.NET CLI

ML.NET CLI is also currently in preview and we would love for you to try it out and share your thoughts below!

Get Started!

If you haven’t already, getting started with ML.NET is easy and you can do so in a few simple steps as shown below. The example below shows how you can perform sentiment analysis with ML.NET.

//Step 1. Create an ML context
var ctx = new MLContext();

//Step 2. Read in the input data for model training
//(SentimentIssue is a simple input class whose properties map to the dataset columns)
IDataView trainingData = ctx.Data
    .LoadFromTextFile<SentimentIssue>(dataPath, hasHeader: true);

//Step 3. Build your estimator
IEstimator<ITransformer> est = ctx.Transforms.Text
    .FeaturizeText("Features", nameof(SentimentIssue.Text))
    .Append(ctx.BinaryClassification.Trainers
        .LbfgsLogisticRegression("Label", "Features"));

//Step 4. Train your model
ITransformer trainedModel = est.Fit(trainingData);

//Step 5. Make predictions using your model
//(SentimentPrediction is an output class with a property bound to the PredictedLabel column)
var predictionEngine = ctx.Model
    .CreatePredictionEngine<SentimentIssue, SentimentPrediction>(trainedModel);

var sampleStatement = new SentimentIssue { Text = "This is a horrible movie" };

var prediction = predictionEngine.Predict(sampleStatement);

You can also explore other learning resources, like tutorials for ML.NET, along with ML.NET samples demonstrating popular scenarios like product recommendation, anomaly detection and more in action.

What’s next with ML.NET

While we are very excited to release ML.NET 1.0 today, the team is already hard at work towards enabling the following features for ML.NET release post 1.0.

  • AutoML experience for additional ML scenarios
  • Improved support for deep learning scenarios
  • Support for additional data sources like SQL Server, CosmosDB, Azure Blob storage and more
  • Scale-out on Azure for model training and consumption
  • Support for additional ML scenarios and features when using Model Builder and CLI
  • Native integration for machine learning at scale with .NET for Apache Spark and ML.NET
  • New ML Types in .NET e.g. DataFrame

You helped build it

A special callout to these amazing contributors who have been with us on this journey for making machine learning accessible to .NET developers with ML.NET.

amiteshenoy, beneyal, bojanmisic, Caraul, dan-drews, DAXaholic, dhilmathy, dzban2137, elbruno, endintiers, f1x3d, feiyun0112, forki, harshsaver, helloguo, hvitved, Jongkeun, JorgeAndd, JoshuaLight, jwood803, kant2002, kilick, Ky7m, llRandom, malik97160, MarcinJuraszek, mareklinka, Matei13, mfaticaearnin, mnboos, nandaleite, Nepomuceno, nihitb06, Niladri24dutta, PaulTFreedman, Pielgrin, pkulikov, Potapy4, Racing5372, rantri, rauhs, robosek, ross-p-smith, SolyarA, Sorrien, suhailsinghbains, terop, ThePiranha, Thomas-S-B, timitoc, tincann, v-tsymbalistyi, van-tienhoang, veikkoeeva, and yamachu

If you haven’t already, give ML.NET a try!

Your feedback is critical for us to help shape ML.NET and make .NET a great platform for machine learning.

Thanks,
ML.NET team

The post Announcing ML.NET 1.0 appeared first on .NET Blog.

.NET Core is the Future of .NET 


We introduced .NET Core 1.0 in November 2014. The goal with .NET Core was to take the learning from our experience building, shipping and servicing .NET Framework over the previous 12 years and build a better product. Some examples of these improvements are side-by-side installations (you can install a new version and not worry about breaking existing apps), self-contained applications (applications can embed .NET, so .NET does not need to be on the computer), not being a component of the Windows operating system (.NET ships new releases independent of the OS schedule) and many more. On top of this, we made .NET Core open source and cross-platform.

.NET Core 1.0 was primarily focused on high performance web and microservices. .NET Core 2.0 added 20K more APIs and components like Razor Pages and SignalR, making it easier to port web applications to .NET Core. And now .NET Core 3.0 embraces the desktop by adding WinForms, WPF and Entity Framework 6, making it possible to port desktop applications to .NET Core.

After .NET Core 3.0 we will not port any more features from .NET Framework. If you are a Web Forms developer and want to build a new application on .NET Core, we would recommend Blazor, which provides the closest programming model. If you are a remoting or WCF developer and want to build a new application on .NET Core, we would recommend either ASP.NET Core Web APIs or gRPC, which provides cross-platform, cross-language, contract-based RPCs. If you are a Windows Workflow developer, there is an open source port of Workflow to .NET Core.

With the .NET Core 3.0 release in September 2019, we think that all *new* .NET applications should be based on .NET Core. The primary application types from .NET Framework are supported, and where we did not port something over there is a recommended modern replacement. All future investment in .NET will be in .NET Core. This includes: Runtime, JIT, AOT, GC, BCL (Base Class Library), C#, VB.NET, F#, ASP.NET, Entity Framework, ML.NET, WinForms, WPF and Xamarin.

.NET Framework 4.8 will be the last major version of .NET Framework. If you have existing .NET Framework applications that you are maintaining, there is no need to move these applications to .NET Core. We will continue to both service and support .NET Framework, which includes bug, reliability and security fixes. It will continue to ship with Windows (much of Windows depends on .NET Framework) and we will continue to improve the tooling support for .NET in Visual Studio (Visual Studio is written on .NET Framework).

Summary 

New applications should be built on .NET Core. .NET Core is where future investments in .NET will happen. Existing applications are safe to remain on .NET Framework which will be supported. Existing applications that want to take advantage of the new features in .NET should consider moving to .NET Core. As we plan into the future, we will be bringing in even more capabilities to the platform. You can read about our plans here. 

The post .NET Core is the Future of .NET  appeared first on .NET Blog.
