
AI, Machine Learning and Data Science Roundup: January 2020


A roundup of news about Artificial Intelligence, Machine Learning and Data Science. This is an eclectic collection of interesting blog posts, software announcements and data applications from Microsoft and elsewhere that I've noted recently.

Open Source AI, ML & Data Science News

Pandas 1.0.0 is released, a milestone for the ubiquitous Python data frame package.

TensorFlow 2.1.0 is released, the last TensorFlow release to support Python 2.

Karate Club, a library of state-of-the-art methods for unsupervised learning on graph structured data, built on the NetworkX Python package.

OpenAI announces it is standardizing on PyTorch, a move likely to affect future deep learning model releases from the research institution.

sparklyr is now a Linux Foundation incubation project, and version 1.1 of the Spark-and-R interface is now available.

DiCE, an open-source Python library for Diverse Counterfactual Explanations for machine learning models, from Microsoft Research.

Industry News

GCP releases Auto Data Exploration and Feature Recommendation Tool, to speed up the process of preparing data for machine learning.

Google Dataset Search, a search engine for finding public datasets, is now generally available.

Metaflow, a "human-centric" framework for data science in Python, released as open source by Netflix and AWS.

RStudio reorganizes as a Public Benefit Corporation, with a charter to create free and open-source software for data science, scientific research, and technical communication.

Facebook open sources VizSeq, a Python toolkit that simplifies visual analysis on a wide variety of text generation tasks.

Open Data on AWS, a new service to share data and include it in the Registry of Open Data on AWS.

Microsoft News

Microsoft launches AI for Health, a $40M five-year program that provides grants, data science experts, technology, and other resources to tackle health issues with AI.

The Python extension for Visual Studio Code adds kernel selection for Notebooks, auto-activation of environments in the terminal, and other improvements.

Azure Machine Learning service now supports VMs with single root input/output virtualization and InfiniBand, to speed up the process of training large deep learning models like BERT.

Microsoft Video Indexer now supports multi-language speech transcription, and extraction of high-resolution key frames for use with Custom Vision.

Microsoft Translator API now supports custom translations, to incorporate domain-specific vocabulary and idioms.

InterpretML, an open-source Python package that implements Explainable Boosting Machine to train interpretable machine learning models and explain black-box systems, is now available as an Alpha release.

fairlearn 0.4.1, the latest release of the Python package for assessing the fairness and mitigating the observed unfairness of AI systems, is now available.

Learning resources

Computer Vision Recipes, a repository of examples and best practice guidelines for building computer vision systems. Includes Jupyter notebooks and utility functions based on PyTorch for several scenarios.

NLP Recipes, a repository of Natural Language Processing best practices and examples. Includes Jupyter notebooks implementing state-of-the-art methods and common scenarios that are popular among researchers and practitioners working on problems involving text and language.

The AI Now Institute 2019 report offers 12 recommendations on what policymakers, advocates and researchers can do to address the use of AI in ways that widen inequality.

Stanford's AI Index 2019 report tracks data, metrics and trends related to Artificial Intelligence research, applications, and impact.

Mathematics for Machine Learning, a book (by Marc Peter Deisenroth, A. Aldo Faisal, and Cheng Soon Ong) to teach the mathematical concepts underlying AI implementations. (PDF available online.)

How to select algorithms for Azure Machine Learning, a guide to selecting machine learning methods by problem type and other requirements.

A tutorial on Automated Machine Learning for time series forecasting, with Azure Machine Learning.

People+AI Guidebook, resources for designing human-centered AI products from Google.

Data Project Checklist, a questionnaire to guide setting up a new data science project from Fast AI's Jeremy Howard.

PyTorch: An Imperative Style, High-Performance Deep Learning Library, the first full research paper on the popular framework.

Applications

AI Dungeon 2, a free-form text adventure game for mobile and browsers. Based on GPT-2, the interactive story can go in just about any direction.

Generative music playing in the lobby of a NYC hotel (created by musician Björk in partnership with Microsoft) reacts to weather and migrating flocks of birds detected by a rooftop camera.

Facebook develops an AI system based on neural machine translation that can solve mathematical equations.

DialoGPT, a large-scale pretrained model from Microsoft Research that generates human-quality conversational responses.

VisualizeMNIST, a browser-based tool to interactively visualize the layers of a simple digit recognition model.

Find previous editions of the AI roundup here.


Mind your Margins!


Introduction

The search box is the most important piece of UX on our page. It won’t be an overstatement to say that the search box is the most important piece of UX on any search engine. As the front line between us and what customers are looking for, it is very important that the search box is:

  • Clear (easy to spot),
  • Responsive (low latency),
  • Intelligent (provides relevant suggestions).

At Bing, every pixel of UX earns its position and size on the page. We put every UX element through rigorous design reviews and multiple controlled experiments. No change to the UX is too trivial, and no change passes through unverified. Beyond this, the various ways users interact with the UX elements on our pages are analyzed constantly. During one such exercise we noticed that some of our customers were having a sub-optimal experience with our search box: some of their clicks were being ignored. As we dug deeper, the investigation led us to recognize the power of detailed instrumentation and the impact that small tweaks in the UX can have on overall customer satisfaction. Along the way we discovered that this issue was not unique to Bing; in fact, we saw it on websites big and small throughout the web. It turns out that “Mind your margins!”, a phrase you might have heard from your English teacher, is still relevant and applies to the search boxes on many of the world’s premier websites.

Figure 1: The search box as it appears to the users on Bing.com.
 

The Issue: Missed Clicks

While analyzing user interaction data on Bing.com, something caught our attention. We noticed that a non-trivial percentage of our users clicked multiple times on the search box, and in some cases the number of clicks was far higher than the number of searches or re-queries issued by the user. When we dug deeper into these interactions using our in-house web instrumentation library, Clarity (which lets us replay user interactions), we determined that in a large number of such cases our users’ clicks were being missed. A “missed click” is a click by the user that does not bring about any change in the UX or the state of the web page. It is as if the click never happened, and it is a common UX issue for buttons and links on many web properties. Missed clicks anywhere on the page are bad, but they are especially bad on the search box, the most important piece of UX on Bing.

To see this in action look at the video snippet below:

Figure 2: A missed click on the search box on Bing.com.


Missed clicks are not easy to detect. Consider a user clicking somewhere on the page you did not anticipate (not on a link, an image, or a button, but on something unclickable like text or empty space). Will your instrumentation carry that signal to your data warehouse? Most websites today will miss the fact that a user clicked somewhere on the page if it was not a button or a link. Luckily for us at Bing, Clarity was able to detect not just missed clicks but many other subtle patterns in how users interact with our web pages. Clarity showed us that even though users were sometimes clicking on our search box multiple times, their clicks were being missed. We were then able to quantify that 4% of all users who clicked on the search box had one or more missed clicks.
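
To make that concrete (this is an illustrative sketch, not how Clarity is actually implemented, and the logging endpoint is hypothetical), a page can record clicks that land on non-interactive elements with a document-level listener:

// Sketch: record clicks that hit nothing clickable, so they can be analyzed later.
const INTERACTIVE = new Set(["A", "BUTTON", "INPUT", "SELECT", "TEXTAREA", "LABEL"]);

document.addEventListener("click", (e: MouseEvent) => {
  const target = e.target as HTMLElement | null;
  // Ignore clicks that an interactive element (or one of its ancestors) will handle.
  if (!target || INTERACTIVE.has(target.tagName) || target.closest("a, button, input")) {
    return;
  }
  // "/log/missed-click" is a hypothetical collection endpoint for this sketch.
  navigator.sendBeacon("/log/missed-click", JSON.stringify({
    x: e.pageX,
    y: e.pageY,
    tag: target.tagName,
    id: target.id,
  }));
}, true); // use the capture phase so the click is observed even if nothing handles it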

 

The Cause: Margins

Once we noticed the missed clicks, we wanted to find out exactly where on the search box they were occurring. One location immediately jumped out at us: most missed clicks were occurring along the left edge of the search box, i.e., on the margin, the area between the HTML form control that contains the search box and the search box itself (shown in orange below). Since both the form and the search box have the same background color, users had no way of knowing that they were clicking on the margin rather than on the search box. Click events on this margin, therefore, were not passed to the search box, causing missed clicks. It was now clear why we were losing 4% of clicks on the search box: the unclickable area covered by the margins (orange) is around 10% of the area of the search box (blue plus green).
 

Figure 3: Clicks on the orange area to the left and top of the search box were not handled by the event handlers associated with the search box and were being ignored.
Figure 4: CSS Box Model of the Bing Search box
 

The Fix

The fix was straightforward: we had to reduce the margins between the search box and the HTML form that contains it, specifically the top, left, and bottom margins. The right margin was less of an issue, since the “search by image” icon and the spyglass icon for search gave users a clear visual clue that this part of the control was not for entering text. The figure below shows one of the many treatments we tried: reducing the top margin by 4px and the left margin by 19px made almost the entire area of the form control (which contains the search box) clickable and all but eliminated missed clicks on the search box, while maintaining visual parity with the control. As an aside, keeping visual parity between control and treatment was important because we wanted to isolate any metric movements to the elimination of the search box margins alone.
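
As a rough sketch of the change (the selector and the exact pixel values are assumptions for illustration, not Bing's production code), the treatment amounts to moving the dead pixels from margin to padding, so the rendering stays the same while the clickable area grows:

// Illustrative sketch only: "#sb_form_q" is an assumed id for the search input.
const searchInput = document.querySelector<HTMLInputElement>("#sb_form_q");
if (searchInput) {
  searchInput.style.marginTop = "0";      // previously ~4px of unclickable space
  searchInput.style.marginLeft = "0";     // previously ~19px of unclickable space
  searchInput.style.paddingTop = "4px";   // padding keeps the text where it was,
  searchInput.style.paddingLeft = "19px"; // and padding is part of the clickable box
}

In CSS terms, the same idea is simply shifting those pixels from the input's margin to its padding in the stylesheet.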

Figure 5: Removal of margins on the search box all but completely hides the orange area and eliminates missed clicks.


Figure 6: Video showing the fix, no more missed clicks on the Bing search box.
 

Results / Gauging User Impact

Once we rolled out the fix to production, missed clicks on the search box all but vanished. Not only that, the results from multiple flights suggested positive movements in our user satisfaction metrics as well. User SAT metrics like traffic utility rate and SSRX (historically the two hardest metrics to move) showed statistically significant movement in the positive direction. Eliminating missed clicks on the search box alone ended up improving the Session Success Rate metric (SSRX) by 0.017%, and the traffic utility rate went up by 0.3%. Both of these metrics are extremely hard to move and have been shown to reflect not just user satisfaction with the search results page but also long-term user retention.

We would have shipped this change just to fix the missed clicks issue, but the positive impact on these metrics was icing on the cake. Yet again we learned that even small changes in the user interface can deliver significant user impact overall.

 

Not Unique to Bing

Armed with our success on Bing, we were curious whether other websites with search boxes had such margins as well; after all, a margin between the search box and its containing HTML control (both with the same background color) is a common UX pattern present on multiple sites. We found that other popular websites, including search engines and social media sites, have unclickable areas (due to margins) on their search boxes too and may be experiencing missed clicks. It is possible that these websites could positively impact their users by applying the same fix we applied on Bing.

 

Conclusion

As we wrapped up our investigation we were left with a few key takeaways:

  1. Small tweaks matter: even tiny changes to the UX can lead to a significant impact on user satisfaction.
  2. Don’t forget Fitts’s law: don’t make it hard for users to click where you want them to.
  3. Scale multiplies the impact of small improvements, and just as much multiplies the negative impact of minor annoyances. Even an issue that affects a tiny fraction of your users can leave thousands, even hundreds of thousands, of users with a sub-optimal experience.
  4. You can’t afford blind spots in your web page instrumentation. Any user action taken on your page, no matter how trivial, should be instrumented, stored, and analyzed.
  5. Don’t take popular UX patterns for granted.

And finally, when it comes to your search boxes, “Mind your Margins!”  😊

Clarity: Fine grained instrumentation to track user interactions

We cannot overstate the contribution of Clarity in surfacing and helping us identify this issue. Clarity, our website analytics tool developed in-house, played a pivotal role in this investigation and showed us the impact of the issue on our user base. As mentioned earlier, missed clicks are not tracked on most websites; fortunately, Clarity keeps track of all user interactions and DOM mutations on the Bing.com webpages, while giving webmasters powerful privacy controls. It provided us with a trove of data on missed clicks and their precise locations on the page, which helped us not only understand the impact of the missed clicks but also identify the fix(es) necessary. If you are a webmaster, we strongly encourage you to explore this tool by visiting https://clarity.microsoft.com and applying for the free pilot so you can start reaping the benefits in just a few clicks. To learn more about how Clarity tracks user interactions, check out the Clarity project page on GitHub.



Retrogaming by modding original consoles to remove moving parts and add USB or SD-Card support


I'm a documented big fan of Retrogaming (playing older games and introducing my kids to those older games).

For example, we enjoy the Hyperkin Retron 5 in that it lets us play NES, Famicom, SNES, Super Famicom, Genesis, Mega Drive, Game Boy, Game Boy Color, and Game Boy Advance cartridges over 5 category ports. With one additional adapter, it adds Game Gear, Master System, and Master System Cards. It uses emulators at its heart, but it requires the use of the original game cartridges. However, the Hyperkin supports all the original controllers - many of which we've found at our local thrift store - which strikes a nice balance between the old and the new. Best of all, it uses HDMI as its output plug, which makes it super easy to hook up to our TV.

I've also blogged about modding/updating existing older consoles to support HDMI. On my Sega Dreamcast I've been very happy with this Dreamcast to HDMI adapter (that's really internally Dreamcast->VGA->HDMI).

The Dreamcast was lovely

When retrogaming there's a few schools of thought:

  • Download ROMs and use emulators - I try not to do this as I want to support small businesses (like used game stores, etc) as well as (in a way) the original artists.
  • Use original consoles with original cartridges
  • Use original consoles with backup images through an I/O mod.
    • I've been doing this more and more as many of my original consoles' CD-ROMs and other moving parts have started to fail.

It's the failure of those moving parts that is the focus of THIS post.

For example, the CD-ROM on my Panasonic 3DO Console was starting to throw errors and have trouble spinning up so I was able to mod it to load the CD-ROMs (for my owned discs) off of USB.

This last week my Dreamcast's GD-ROM finally started to get out of alignment.

Fixing Dreamcast Disc Errors

You can align a Dreamcast GD-ROM by opening it up: remove the four screws on the bottom. Lift up the entire GD-ROM unit without pulling too hard on the ribbon cable. You may have to push the whole laser (don't touch the lens) back in order to flip the unit over.

Then, via trial and error, turn the screw shown below to the right about 5 degrees (very small turn) and test, then do it again, until your drive spins up reliably. It took me 4 tries and about 20 degrees. Your mileage may vary.

The Dreamcast GD-ROM just pops out
Take out the whole GD-ROM

Turn this screw to align your laser on your Dreamcast

This fix worked for a while but it was becoming clear that I was going to eventually have to replace the whole thing. These are moving parts and moving parts wear out.

Adding solid state (SD-Card) storage to a Dreamcast

Assuming you, like me, have a VA1 Dreamcast (which is most of them) there are a few options to "fake" the GD-ROM. My favorite is the GDEMU mod which requires no soldering and can be done in just a few minutes. You can get them directly or on eBay. I ordered a version 5.5 and it works fantastically.

You can follow the GDEMU instructions to lay out a FAT32 formatted SD Card as it wants it, or you can use this little obscure .NET app called GDEMU SD Card Maker.
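
For what it's worth, here is roughly how my card ended up organized. Treat this as a sketch of my setup rather than the authoritative layout, and follow the official GDEMU instructions for the exact requirements of your firmware version:

SD card (FAT32)
  01\    first slot; commonly GDmenu so you can pick a game at boot
  02\    disc.gdi plus its track .bin/.raw files (or a single .cdi image)
  03\    ...one numbered folder per game...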

The resulting Dreamcast now has an SD Card inside, under where the GD-ROM used to be. It works well, it's quiet, it's faster than the GD-ROM and it allows me to play my backups without concern of breaking any moving parts.

Modded Dreamcast

Other small Dreamcast updates

As a moving part, the fan can sometimes fail, so I replaced my fan using a guide from iFixit. In fact, a 3-pin 5V Noctua silent fan works great. You can purchase that fan plus a mod kit with a 3D-printed adapter that includes a fan duct and a conversion cable with a 10k resistor, or you can certainly 3D print your own.

If you like this kind of content, go follow me on Instagram!


Sponsor: When you use my Amazon.com affiliate links to buy small things it allows me to also buy small things. Thanks!



© 2019 Scott Hanselman. All rights reserved.
     

Hosting your own NuGet Server and Feed for build artifacts with BaGet


BaGet is a great NuGet alternative

NuGet is the package management system underlying the .NET programming platform. Just like Ruby Gems or NPM Packages, you can bring in 3rd party packages or make your own. The public repository is hosted at http://nuget.org BUT the magic is that there are alternatives! There are lots of alternative servers, as well as alternative clients like Paket.

There's a whole ecosystem of NuGet servers. You can get filtered views, authenticated servers, special virus scanned repositories, your own custom servers where your CI/CD (Continuous Integration/Continuous Deployment) system can publish daily (hourly?) NuGet packages for other teams to consume.

Ideally in a team situation you'll have one team produce NuGet Packages and publish them to a private NuGet feed to be consumed by other teams.

Here's just a few cool NuGet servers or views on NuGet.org:

  • FuGet.org
    • FuGet is "pro nuget package browsing!" Created by the amazing Frank A. Krueger - of whom I am an immense fan - FuGet offers a different view on the NuGet package library. NuGet is a repository of nearly 150,000 open source libraries, and the NuGet Gallery does a decent job of letting one browse around. However, https://github.com/praeclarum/FuGetGallery is an alternative web UI with a lot more depth.
  • Artifactory
    • Artifactory is a, ahem, factory for build artifacts of all flavors, NuGet being just one of them. You can even make your own internal cache of NuGet.org. You can remove or block access to packages you don't want your devs to have.
  • NuGet Gallery
    • You can just run your OWN instance of the NuGet.org website! It's open source
  • NuGet.Server
    • NuGet.Server is an MVP (Minimum Viable Product) of a NuGet Server. It's small and super lightweight but it's VERY limited. Consider using BaGet (below) instead.
  • GitHub Packages
    • GitHub has a package repository with a small free tier, and it also scales up to Enterprise size if you want a "SaaS" offering (software as a service)
  • Azure Artifacts
    • Azure Artifacts can also provide a SaaS setup for your NuGet packages. Set it up and forget it. A simple place for your automated build to drop your build artifacts.
  • MyGet
    • MyGet can hold packages of all kinds, including NuGet. They are well known for their license compliance system, so you can make sure your devs and enterprise are only using the projects your org can support.
  • LiGet
    • A NuGet server with a Linux-first approach
  • BaGet (pronounced baguette)
    • This is one of my favorites. It's a new fresh NuGet server written entirely in ASP.NET Core 3.1. It's cross platform, open source, and runs in Azure, AWS, Google Cloud, behind IIS, or via Docker. Lovely!  It's also a great example of some thoughtfully architected code, good plugin model, nice separation of concerns, and a good test suite. If you are using NuGet.Server now, move over to BaGet!

Let's focus on BaGet for now! Go give them some love/stars on GitHub!

Setting up a cross platform personal NuGet Server with BaGet

BaGet is a lovely little server that already supports a nice set of features.

The most initially powerful feature, in my opinion, is read-through caching.

This lets you index packages from an upstream source. You can use read-through caching to:

  1. Speed up your builds if restores from nuget.org are slow
  2. Enable package restores in offline scenarios

This can be great for folks on low bandwidth or remote scenarios. Put BaGet in front of your developers and effectively make a NuGet "edge CDN" that's private to you.
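
As a sketch of how this is wired up (key names reflect my reading of the BaGet docs at the time, so confirm against the current documentation), read-through caching is enabled in BaGet's appsettings.json with a mirror section pointing at nuget.org:

{
  "Mirror": {
    "Enabled": true,
    "PackageSource": "https://api.nuget.org/v3/index.json"
  }
}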

If you are familiar with Docker, you can get a BaGet NuGet server up in minutes. You can also use Azure or AWS or another cloud to store your artifacts in a scalable way.

NOTE: You'll notice that the docs for things like "running BaGet on Azure" aren't complete. This is a great opportunity for YOU to help out and get involved in open source! Note that BaGet has a number of open issues on their GitHub *and* they've labeled some as "Good First Issue!"

If you want to try running BaGet without Docker, just

  1. Install .NET Core SDK
  2. Download and extract BaGet's latest release
  3. Start the service with dotnet BaGet.dll
  4. Browse http://localhost:5000/ in your browser

That's it! All the details on Getting Started with BaGet are on their GitHub. Go give them some love and stars.
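
Spelled out as commands, a minimal run plus publishing your first package might look like the following sketch (the Docker image name and the default port are assumptions based on my reading of the BaGet docs, so verify them there):

# Run BaGet from the extracted release
dotnet BaGet.dll

# Or run it in Docker instead (assumed image name)
docker run --rm -p 5000:80 loicsharma/baget:latest

# Push a package to your new feed
dotnet nuget push -s http://localhost:5000/v3/index.json MyPackage.1.0.0.nupkg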



© 2019 Scott Hanselman. All rights reserved.
     

Managing eDiscovery for modern collaboration


Modern organizations require eDiscovery to extend to chat-based communication and collaboration tools. Today, we're pleased to share several new capabilities in Microsoft 365 to help you manage eDiscovery for Microsoft Teams and Yammer with expanded visibility into case content.

The post Managing eDiscovery for modern collaboration appeared first on Microsoft 365 Blog.

Bringing the Microsoft Edge DevTools to more languages


We know inclusivity makes us work better and we love when we find ways to put that knowledge into practice. On the Edge team, we believe—and usage and research show—that developer experiences are more productive when they fit our language and location preferences. Today, we’re excited to move in that direction by announcing that the new Microsoft Edge now features DevTools localized in 10 languages (in addition to English):

  • Chinese (Simplified) – 中文(简体)
  • Chinese (Traditional) – 中文(繁體)
  • French – français
  • German – Deutsch
  • Italian – italiano
  • Portuguese – português
  • Korean – 한국어
  • Japanese – 日本語
  • Russian – русский
  • Spanish – español

This adds our new browser tools to a long list of other localized Microsoft developer experiences including VS Code, Azure Portal, and more.

This release is the result of collaboration over many months between our team and the DevTools, Lighthouse, and Chrome teams at Google. We’ve contributed all localizability features upstream (explainer), and we plan to continue doing so, so that other browsers can benefit from this work.

Try the localized developer tools

Make sure you have “Enable localized Developer Tools” turned on by heading to edge://flags, finding that flag, and setting it to “Enabled” (this is on by default in Canary; on by default soon in Dev, Beta, and Stable channels). Once on, your DevTools will match the language of the browser.
 
On macOS, the developer tools inherit the language from the operating system. You can change the language in the Settings under the “Language and Region” section: add another language and make it your primary. Once you restart Microsoft Edge, the developer tools will be in that language.

If you just wanted to try the feature out and wish to revert to English, go to DevTools Settings (F1) > Preferences and click the checkbox to deselect “Match browser language.”

Screenshot of the Edge DevTools in Japanese

What’s next

For the initial release, we went with the top languages used by web developers within our ecosystem. Next, we’re evaluating popular right-to-left languages like Hebrew and Arabic and working on localizing our documentation. If you’d like those languages or other features in the localization and internationalization space, please let us know. We’re always happy to hear your thoughts.

To get in touch, you can Send Feedback from the Microsoft Edge menu (Alt-Shift-I), or share your thoughts with us on Twitter.

Erica Draud, Program Manager, Microsoft Edge DevTools

The post Bringing the Microsoft Edge DevTools to more languages appeared first on Microsoft Edge Blog.

LinkedIn Uses Bing Maps to Calculate the Commute to Your Dream Job


Finding your dream job can take effort, but with some innovative features, LinkedIn has made it easier. A survey of LinkedIn members uncovered that potential commute time is high on the list of factors when considering a role.

With the help of the Bing Maps APIs, the LinkedIn team has developed features like “Your Commute”, bringing location intelligence to bear on the job hunt.

LinkedIn - Your Commute Feature

Read the full story to learn how LinkedIn uses Bing Maps Isochrone API, Bing Maps Autosuggest API, and Bing Maps Locations API to help its members find the right job.
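
To give a flavor of what that looks like in practice (this sketch reflects my reading of the Bing Maps REST documentation rather than LinkedIn's actual integration), an Isochrone request for a 30-minute drive-time polygon around a point is a single GET:

https://dev.virtualearth.net/REST/v1/Routes/Isochrones?waypoint=47.65431,-122.12908&maxTime=30&timeUnit=minute&travelMode=driving&key=YOUR_BING_MAPS_KEY

The response contains a polygon of coordinates reachable within that time budget, which a feature like “Your Commute” could overlay on a map.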

- Bing Maps Team


JCC Erratum Mitigation in MSVC


The content of this blog was provided by Gautham Beeraka from Intel Corporation.

Intel recently announced the Jump Conditional Code (JCC) erratum, which can occur in some of its processors. The MSVC team has been working with Intel to provide a software fix in the compiler to mitigate the performance impact of the microcode update that prevents the erratum.

Introduction

There are three things one should know about the JCC erratum:

  1. What the erratum is, and if and how it affects you.
  2. The microcode update that prevents the erratum, whether you have it, and its side effects.
  3. MSVC compiler support to mitigate the side effects of the microcode update.

Each of these topics is explained below.

JCC Erratum

The processors listed in Intel’s white paper referenced above have an erratum which can occur under certain conditions that involve jump instructions overlaying a cache-line boundary. This erratum can result in unpredictable behavior for the software running on these processors. If your software runs on these processors, you are affected by this erratum.

Microcode Update

Applying a microcode update (MCU) can prevent the JCC erratum. The MCU works by preventing jump instructions that overlay or end on a 32-byte boundary, as shown in the figure below, from being cached in the decoded uop cache. The MCU affects conditional jumps, macro-fused conditional jumps, direct unconditional jumps, indirect jumps, direct/indirect calls, and returns.

Examples of instructions which straddle a 32-byte boundary

The MCU will be distributed through Windows Update. We will update this blog once we have more information on the Windows Update. Note that the MCU is not specific to Windows and applies to other operating systems also.

Applying the MCU can regress the performance of software running on the patched machines. Based on our measurements, we see an impact between 0% and 3%. The impact was higher on a few outlier microbenchmarks.

Software Mitigation in MSVC compiler

To mitigate the performance impact, developers can build their code with the software fix enabled by the /QIntel-jcc-erratum switch in the MSVC compiler. We observed that the performance regressions become negligible after rebuilding with this fix. The switch can increase code size; the increase was about 3% based on our measurements.

How to enable the software mitigation?

Starting from Visual Studio 2019 version 16.5 Preview 2, developers can apply the software mitigation for the performance impact of the MCU. To enable software mitigation for JCC erratum for your code, simply select “Yes” under the “Code Generation” section of the project Property Pages:

Screenshot of the Enable Intel JCC Erratum Mitigation in the property pages

A few undocumented compiler flags are also available to restrict the scope of the software mitigation as shown below. These flags can be useful to experiment with, but we are not committed to service them in future releases.

  1. /d2QIntel-jcc-erratum-partial – This applies the mitigation only inside loops in a function.
  2. /d2QIntel-jcc-erratum:<file.txt> – This applies the mitigation only to functions specified within file.txt.
  3. /d2QIntel-jcc-erratum-partial:<file.txt> – This applies the mitigation only to loops in the functions specified within file.txt.

The function names given in <file.txt> are the decorated function names as used by the compiler.

To enable these flags, add them to the “Additional Options” under the “Command Line” section of the project Property Pages:

Screenshot of adding /d2QIntel-jcc-erratum-partial to the additional compiler flags

All these switches work only in release builds and are incompatible with the /clr switches. If multiple /d2QIntel-jcc-erratum* switches are given, full processing (all branches) is favored over partial (loop branches only) processing. If any of the switches specifies a functions file, the processing is limited to just those functions.
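
On the command line, these options are passed to cl.exe like any other switch; for example (file names here are placeholders):

REM Full mitigation for all affected branches
cl /O2 /QIntel-jcc-erratum main.cpp

REM Mitigate only branches inside loops (undocumented /d2 switch)
cl /O2 /d2QIntel-jcc-erratum-partial main.cpp

REM Restrict the mitigation to the decorated function names listed in hot.txt
cl /O2 /d2QIntel-jcc-erratum:hot.txt main.cpp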

What does the software mitigation do?

The software mitigation in the compiler detects all affected jumps in the code (the jumps that overlay or end at a 32-byte boundary) and aligns them to start at this boundary. This is done by adding benign segment override prefixes to the instructions before the jump. The size of the resultant instructions increases but remains less than 15 bytes. In situations where prefixes cannot be added, NOPs are used. The example below shows how the compiler generates code with the mitigation on and off.

Sample C++ code:

for (int i = 0; i < length; i++) {
    sum += arr[i] + c;
}

Code without /QIntel-jcc-erratum (/O2 /FAsc):

$LL8@test1:
  00010 44 8b 0c 91    mov r9d, DWORD PTR [rcx+rdx*4]
  00014 48 ff c2       inc rdx
  00017 45 03 c8       add r9d, r8d
  0001a 41 03 c1       add eax, r9d
  0001d 49 3b d2       cmp rdx, r10
  00020 7c ee          jl SHORT $LL8@test1

Code with /QIntel-jcc-erratum (/O2 /FAsc /QIntel-jcc-erratum):

$LL8@test1:
  00010 3e 3e 3e 44 8b 0c 91    mov r9d, DWORD PTR [rcx+rdx*4]
  00017 48 ff c2                inc rdx
  0001a 45 03 c8                add r9d, r8d
  0001d 41 03 c1                add eax, r9d
  00020 49 3b d2                cmp rdx, r10
  00023 7c eb                   jl SHORT $LL8@test1

 

In the example above, the CMP and JL instructions are macro-fused and overlay a 32-byte boundary. The mitigation pads the first instruction in the block, the MOV instruction, with a 0x3E prefix to align the CMP instruction to begin on a 32-byte boundary.

What is the performance story?

We did evaluate the performance impact of the MCU and fix in the MSVC compiler. The numbers stated below use the following test PC configuration.

Processor – Intel® Core™ i9 9900K @ 3.60GHz

Operating System – Private build of Windows with the MCU applicable to this processor.

Benchmark suite – SPEC CPU® 2017

Based on our measurements, we see regressions ranging from 0-3% after applying the MCU. We also saw regressions going up to 10% on some outlier microbenchmarks.

Applying the software mitigation through the /QIntel-jcc-erratum switch in MSVC compiler makes the regressions negligible. This switch applies the mitigation globally to all modules built with it and increases code size. We measured an average of 3% code size bloat.

We measured that applying the mitigation only in loops through the /d2QIntel-jcc-erratum-partial switch also makes the performance regressions negligible but with lesser code size increase. We measured an average of 1.5% code size bloat with the partial mitigation. You can further reduce the code size impact and get most of the performance back by applying the mitigations only to hot functions through the /d2QIntel-jcc-erratum:<file.txt> and /d2QIntel-jcc-erratum-partial:<file.txt> switches.

We also measured that the performance impact of /QIntel-jcc-erratum switch on processors that are not affected by the erratum is negligible. However, as codebases vary greatly, we advise developers to evaluate the impact of /QIntel-jcc-erratum in the context of their applications and workloads.

Closing Notes

If your software can run on the machines with processors affected by the JCC erratum and versions of Windows with the MCU, we encourage you to profile your code and check for performance regressions. You can use Windows Performance Toolkit or Intel® VTune ™ Profiler to profile your code. You can detect if the MCU is affecting performance by following steps in Intel’s white paper. If you are affected, recompile with /QIntel-jcc-erratum or other switches listed above to mitigate the effects.

Your feedback is key to deliver the best experience. If you have any questions, please feel free to ask us below. You can also send us your comments through e-mail. If you encounter problems with the experience or have suggestions for improvement, please Report A Problem or reach out via Developer Community. You can also find us on Twitter @VisualC.

 

The post JCC Erratum Mitigation in MSVC appeared first on C++ Team Blog.

Backup Explorer now available in preview


As organizations continue to expand their use of IT and the cloud, protecting critical enterprise data becomes extremely important. And if you are a backup admin on Microsoft Azure, being able to efficiently monitor backups on a daily basis is a key requirement to ensuring that your organization has no weaknesses in its last line of defense.

Up until now, you could use a Recovery Services vault to get a bird’s eye view of items being backed up under that vault, along with the associated jobs, policies, and alerts. But as your backup estate expands to span multiple vaults across subscriptions, regions, and tenants, monitoring this estate in real-time becomes a non-trivial task, requiring you to write your own customizations.

What if there were a simpler way to aggregate information across your entire backup estate into a single pane of glass, enabling you to quickly identify exactly where to focus your energy?

Today, we are pleased to share the preview of Backup Explorer. Backup Explorer is a built-in Azure Monitor Workbook enabling you to have a single pane of glass for performing real-time monitoring across your entire backup estate on Azure. It comes completely out-of-the-box, with no additional costs, via native integration with Azure Resource Graph and Azure Workbooks.
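
To give a feel for the kind of cross-subscription query that Azure Resource Graph enables (this is just an illustrative sketch, not the query Backup Explorer itself runs), counting Recovery Services vaults per subscription and region looks like this in Resource Graph's KQL:

Resources
| where type =~ 'microsoft.recoveryservices/vaults'
| summarize vaultCount = count() by subscriptionId, location
| order by vaultCount desc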

Key Benefits

1) At-scale views – With Backup Explorer, monitoring is no longer limited to a Recovery Services vault. You can get an aggregated view of your entire estate from a backup perspective. This includes not only information on your backup items, but also resources that are not configured for backup, ensuring that you don’t ever miss protecting critical data in your growing estate. And if you are an Azure Lighthouse user, you can view all of this information even across multiple tenants, enabling truly boundary-less monitoring.

2) Deep drill-downs – You can quickly switch between aggregated views and highly granular data for any of your backup-related artifacts, be it backup items, jobs, alerts or policies.

3) Quick troubleshooting and actionability – The at-scale views and deep drill-downs are designed to aid you in getting to the root cause of a backup-related issue. Once you identify an issue, you can act on it by seamlessly navigating to the backup item or the Azure resource, right from Backup Explorer.

Backup Explorer is currently supported for Azure Virtual Machines. Support for other Azure workloads will be added soon.

At Azure Backup, Backup Explorer is just one part of our overall goal to enable a delightful, enterprise-ready management-at-scale experience for all our customers.

Getting Started

To get started with using Backup Explorer, you can simply navigate to any Recovery Services vault and click on Backup Explorer in the quick links section.

Backup Explorer link in Recovery Services Vault

You will be redirected to Backup Explorer which gives a view across all the vaults, subscriptions, and tenants that you have access to.

Summary tab of Backup Explorer

More information

Read the Backup Explorer documentation for detailed information on leveraging the various tabs to solve different use-cases.

Advancing safe deployment practices


"What is the primary cause of service reliability issues that we see in Azure, other than small but common hardware failures? Change. One of the value propositions of the cloud is that it’s continually improving, delivering new capabilities and features, as well as security and reliability enhancements. But since the platform is continuously evolving, change is inevitable. This requires a very different approach to ensuring quality and stability than the box product or traditional IT approaches — which is to test for long periods of time, and once something is deployed, to avoid changes. This post is the fifth in the series I kicked off in my July blog post that shares insights into what we're doing to ensure that Azure's reliability supports your most mission critical workloads. Today we'll describe our safe deployment practices, which is how we manage change automation so that all code and configuration updates go through well-defined stages to catch regressions and bugs before they reach customers, or if they do make it past the early stages, impact the smallest number possible. Cristina del Amo Casado from our Compute engineering team authored this posts, as she has been driving our safe deployment initiatives.” - Mark Russinovich, CTO, Azure


 

When running IT systems on-premises, you might try to ensure perfect availability by having gold-plated hardware, locking up the server room, and throwing away the key. Software-wise, IT would traditionally prevent as much change as possible — avoiding applying updates to the operating system or applications because they’re too critical, and pushing back on change requests from users. With everyone treading carefully around the system, this ‘nobody breathe!’ approach stifles continued system improvement, and sometimes even compromises security for systems that are deemed too crucial to patch regularly. As Mark mentioned above, this approach doesn't work for change and release management in a hyperscale public cloud like Azure. Change is both inevitable and beneficial, given the need to deploy service updates and improvements, and given our commitment to you to act quickly in the face of security vulnerabilities. As we can’t simply avoid change, Microsoft, our customers, and our partners need to acknowledge that change is expected, and we plan for it. Microsoft continues to work on making updates as transparent as possible and will deploy the changes safely as described below. Having said that, our customers and partners should also design for high availability and consume the maintenance events sent by the platform so they can adapt as needed. Finally, in some cases, customers can take control of initiating the platform updates at a time suitable for their organization.
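
As a concrete example of consuming those maintenance events, planned maintenance notifications surface inside a VM through the Azure Instance Metadata Service's Scheduled Events endpoint, which a workload can poll (the API version shown is one I've seen in the documentation; check for the latest):

curl -H "Metadata: true" "http://169.254.169.254/metadata/scheduledevents?api-version=2019-08-01"

The response lists upcoming events (such as reboot, redeploy, or freeze) with their scheduled times, giving the workload a window to drain traffic or checkpoint state before approving the event.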

Changing safely

When considering how to deploy releases throughout our Azure datacenters, one of the key premises that shapes our processes is to assume that there could be an unknown problem introduced by the change being deployed, plan in a way that enables the discovery of said problem with minimal impact, and automate mitigation actions for when the problem surfaces. While a developer might judge it as completely innocuous and guarantee that it won't affect the service, even the smallest change to a system poses a risk to the stability of the system, so ‘changes’ here refers to all kinds of new releases and covers both code changes and configuration changes. In most cases a configuration change has a less dramatic impact on the behavior of a system but, just as for a code change, no configuration change is free of risk for activating a latent code defect or a new code path.

Teams across Azure follow similar processes to prevent or at least minimize impact related to changes. Firstly, by ensuring that changes meet the quality bar before the deployment starts, through test and integration validations. Then after sign off, we roll out the change in a gradual manner and measure health signals continuously, so that we can detect in relative isolation if there is any unexpected impact associated with the change that did not surface during testing. We do not want a change causing problems to ever make it to broad production, so steps are taken to ensure we can avoid that whenever possible. The gradual deployment gives us a good opportunity to detect issues at a smaller scale (or a smaller ‘blast radius’) before it causes widespread impact.

Azure approaches change automation, aligned with the high level process above, through a safe deployment practice (SDP) framework, which aims to ensure that all code and configuration changes go through a lifecycle of specific stages, where health metrics are monitored along the way to trigger automatic actions and alerts in case of any degradation detected. These stages (shown in the diagram that follows) reduce the risk that software changes will negatively affect your existing Azure workloads.

A diagram showing how the cost and impact of failures increases throughout the production rollout pipeline, and is minimized by going through rounds of development and testing, quality gates, and integration.

This shows a simplification of our deployment pipeline, starting on the left with developers modifying their code, testing it on their own systems, and pushing it to staging environments. Generally, this integration environment is dedicated to teams for a subset of Azure services that need to test the interactions of their particular components together. For example, core infrastructure teams such as compute, networking, and storage share an integration environment. Each team runs synthetic tests and stress tests on the software in that environment, iterates until stable, and then, once the quality results indicate that a given release, feature, or change is ready for production, deploys the changes into the canary regions.

Canary regions

Publicly we refer to canary regions as “Early Updates Access Program” regions; they’re effectively full-blown Azure regions with the vast majority of Azure services. One of the canary regions is built with Availability Zones and the other without, and the two form a region pair so that we can validate data geo-replication capabilities. These canary regions are used for full, production-level, end-to-end validations and scenario coverage at scale. They host some first-party services (for internal customers), several third-party services, and a small set of external customers that we invite into the program to help increase the richness and complexity of the scenarios covered, all to ensure that canary regions have usage patterns representative of our public Azure regions. Azure teams also run stress and synthetic tests in these environments, and periodically we execute fault injections or disaster recovery drills at the region or Availability Zone level, to practice the detection and recovery workflows that would be run if this occurred in real life. Separately and together, these exercises help ensure that software is of the highest quality before the changes touch broad customer workloads in Azure.

Pilot phase

Once the results from canary indicate that there are no known issues detected, the progressive deployment to production can get started, beginning with what we call our pilot phase. This phase enables us to try the changes, still at a relatively small scale, but with more diversity of hardware and configurations. This phase is especially important for software like core storage services and core compute infrastructure services, that have hardware dependencies. For example, Azure offers servers with GPU's, large memory servers, commodity servers, multiple generations and types of processors, Infiniband, and more, so this enables flighting the changes and may enable detection of issues that would not surface during the smaller scale testing. In each step along the way, thorough health monitoring and extended 'bake times' enable potential failure patterns to surface, and increase our confidence in the changes while greatly reducing the overall risk to our customers.

Once we determine that the results from the pilot phase are good, the deployment systems proceed by allowing the change to progress to more and more regions incrementally. Throughout the deployment to the broader Azure regions, the deployment systems endeavor to respect Availability Zones (a change only goes to one Availability Zone within a region) and region pairing (every region is ‘paired up’ with a second region for georedundant storage) so a change deploys first to a region and then to its pair. In general, the changes deploy only as long as no negative signals surface.

Safe deployment practices in action

Given the scale of Azure globally, the entire rollout process is completely automated and driven by policy. These declarative policies and processes (not the developers) determine how quickly software can be rolled out. Policies are defined centrally and include mandatory health signals for monitoring the quality of software as well as mandatory ‘bake times’ between the different stages outlined above. The reason to have software sitting and baking for different periods of time across each phase is to make sure to expose the change to a full spectrum of load on that service. For example, diverse organizational users might be coming online in the morning, gaming customers might be coming online in the evening, and new virtual machines (VMs) or resource creations from customers may occur over an extended period of time.

Global services, which cannot take the approach of progressively deploying to different clusters, regions, or service rings, also practice a version of progressive rollouts in alignment with SDP. These services follow the model of updating their service instances in multiple phases, progressively diverting traffic to the updated instances through Azure Traffic Manager. If the signals are positive, more traffic gets diverted to updated instances over time, increasing confidence and unblocking the deployment from being applied to more service instances.

Of course, the Azure platform also has the ability to deploy a change simultaneously to all of Azure, in case this is necessary to mitigate an extremely critical vulnerability. Although our safe deployment policy is mandatory, we can choose to accelerate it when certain emergency conditions are met – for example, to release a security update that requires us to move much more quickly than we normally would, or to ship a fix where the risk of regression is outweighed by the fix mitigating a problem that’s already very impactful to customers. These exceptions are very rare; in general, our deployment tools and processes intentionally sacrifice velocity to maximize the chance for signals to build up and scenarios and workflows to be exercised at scale, thus creating the opportunity to discover issues at the smallest possible scale of impact.

Continuing improvements

Our safe deployment practices and deployment tooling continue to evolve with learnings from previous outages and maintenance events, and in line with our goal of detecting issues at a significantly smaller scale. For example, we have learned about the importance of continuing to enrich our health signals and about using machine learning to better correlate faults and detect anomalies. We also continue to improve the way in which we do pilots and flighting, so that we can cover more diversity of hardware with smaller risk. We continue to improve our ability to roll back changes automatically if they show potential signs of problems. We also continue to invest in platform features that reduce or eliminate the impact of changes generally.

With over a thousand new capabilities released in the last year, we know that the pace of change in Azure can feel overwhelming. As Mark mentioned, the agility and continual improvement of cloud services is one of the key value propositions of the cloud – change is a feature, not a bug. To learn about the latest releases, we encourage customers and partners to stay in the know at Azure.com/Updates. We endeavor to keep this as the single place to learn about recent and upcoming Azure product updates, including the roadmap of innovations we have in development. To understand the regions in which these different services are available, or when they will be available, you can also use our tool at Azure.com/ProductsbyRegion.

Code Navigation for CMake Scripts


Visual Studio 2019 16.5 Preview 2 makes it easy to make sense of complex CMake projects. Code navigation features such as Go To Definition and Find All References are now supported for variables, functions, and targets in CMake script files. This can be a huge timesaver because CMake projects with more than a handful of source files are often organized into several CMake scripts to encapsulate each part of the project.

These navigation features work across your entire CMake project to offer more productivity than naïve text search across files and folders. They are also integrated with other IDE productivity features such as Peek Definition.

Go To Definition:

Go To Definition with Peek on a CMake variable.

Find All References:

Find All References working across a CMake project.

You can configure the in-editor documentation and navigation features for CMake scripts in Tools > Options > CMake > Language Services:

CMake script language service settings in “Tools > Options > CMake > Language Services”.

Send Us Feedback

Please try out the latest preview and let us know if you have any feedback. It is always appreciated! The best way to get in touch with us about an issue or suggestion is through Developer Community with the “Report a Problem” or “Suggest a Feature” tools. This makes it easy for us to follow up and for you to get the latest updates about our progress. Feel free to comment here or send an email to cmake@microsoft.com with questions as well.

The post Code Navigation for CMake Scripts appeared first on C++ Team Blog.

Easily Add, Remove, and Rename Files and Targets in CMake Projects


It’s easier than ever to work with CMake projects in Visual Studio 2019 16.5 Preview 2. Now you can add, remove, and rename source files and targets in your CMake projects from the IDE without manually editing your CMake scripts. When you add or remove files with the Solution Explorer, Visual Studio will automatically edit your CMake project. You can also add, remove, and rename the project’s targets from the Solution Explorer’s targets view.

“Add > New Item” in Solution Explorer’s targets view.

C and C++ Source Files

Visual Studio now tracks C and C++ source files as they are added, renamed, or removed from the Solution Explorer, automatically modifying the underlying CMake project. This feature is enabled by default as of Visual Studio 2019 16.5 Preview 2, but if you would prefer that Visual Studio not automatically modify your project, it can be turned off in Tools > Options > CMake, “Enable automatic CMake script modification…”:

Tools > Options > CMake, “Enable automatic CMake script modification.”

Targets and References

The CMake targets view offers even more functionality. From here, in addition to adding and removing files, you can add, rename, and remove targets. You can access the CMake targets view by clicking on the Solution Explorer’s drop-down menu to the right of the home button:

Access the CMake targets view from the Solution Explorer’s drop-down to the right of the home button.

If you have worked with solutions generated by CMake, this view will look familiar – but unlike a generated solution you will be able to change the underlying CMake project directly in the IDE. Visual Studio currently supports modifying the following:

1. Adding, removing, renaming source files in a target:

CMake Targets View Add Item Menu.

2. Adding, removing, renaming targets in a CMake project:

Add Target Menu.

Add Target Dialog.

3. Viewing and creating references between targets in the project:

Add reference dialog.

CMake references list.

Resolving Ambiguity

In some cases, there may be more than one place where it makes sense to add a source file to a CMake script. When this happens, Visual Studio will ask you where you want to make the change and display a preview of the proposed modifications:

Resolve ambiguity with the preview changes dialog.

Send Us Feedback

Please try out the latest preview and let us know if you have any feedback. It is always appreciated! The best way to get in touch with us about an issue or suggestion is through Developer Community with the “Report a Problem” or “Suggest a Feature” tools. This makes it easy for us to follow up and for you to get the latest updates about our progress. Feel free to comment here or send an email to cmake@microsoft.com with questions as well.

The post Easily Add, Remove, and Rename Files and Targets in CMake Projects appeared first on C++ Team Blog.

GC Handles


A customer asked me about analyzing perf related to GC handles. I feel like, aside from pinned handles, handles in general are not talked about much, so this topic warrants some explanation – especially since this is a user-facing feature.

For some background info, GC handles are indeed generational, so when we are doing ephemeral GCs we only need to scan handles in the generations we are collecting. However, there are complications associated with handles’ ages because of the way handles are organized – we don’t organize by individual handles, we organize them in groups, and each group needs to have the same age. So if we need to set one handle to be younger, we set all of them to be younger and they will all get reported for that younger generation. But this is hard to quantify, so I would definitely recommend measuring with the tools we provide.

Handles are exposed in various ways. The way that’s perhaps most familiar to folks is via the GCHandle type. Only 4 types are exposed this way: Normal, Pinned, Weak and WeakTrackResurrection. Weak and WeakTrackResurrection types are internally called short and long weak handles. But other types are used via the BCL, such as the dependent handle type used by ConditionalWeakTable (and yes, I am aware that there’s desire to expose this type directly as a GCHandle; I’ll touch more on this topic below).
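
For readers who haven’t used them directly, here is a small, self-contained C# sketch of allocating and freeing a couple of the handle types mentioned above (the buffer and names are just for illustration):

using System;
using System.Runtime.InteropServices;

class HandleSketch
{
    static void Main()
    {
        var buffer = new byte[256];

        // A pinned handle keeps 'buffer' at a fixed address, e.g. for native interop.
        GCHandle pinned = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        IntPtr address = pinned.AddrOfPinnedObject();
        Console.WriteLine($"Pinned at {address}");

        // A short weak handle ('Weak') tracks the object without keeping it alive.
        GCHandle weak = GCHandle.Alloc(buffer, GCHandleType.Weak);
        Console.WriteLine($"Still alive? {weak.Target != null}");

        // Handles themselves are not garbage collected - free them when done.
        pinned.Free();
        weak.Free();
    }
}

Both handles count toward the GCHandleCount discussed below; the pinned one can also show up in PinnedObjectCount when a GC observes it.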

Historically GC handles were considered an “add-on” thing so we didn’t expect many of them. But as with many other things, they evolve. I’ve been seeing handle usage in general go up by quite a lot (part of it is due to libraries using handles more and more). Currently we collect handle info in ETW events but it’s not very detailed, so I do plan to make the diagnostics info on handles richer.

Right now what we offer is:

  • # of pinned objects promoted in this GC as a column for each GC in the GCStats view in PerfView. This is the number of pinned objects that the GC observed in that GC, including objects pinned by pinned handles (async pinned handles used by IO) and objects pinned by the stack.
  • In the “Raw Data XML file (for debugging)” link in the GCStats view you can generate a file with more detailed info per GC including handle info; you’ll see something like this:

<GCEvent GCNumber="358" GCGeneration="0" …[many fields omitted] Reason="AllocSmall">
<HeapStats GenerationSize0="77,228,592" …[many fields omitted] PinnedObjectCount="1" SinkBlockCount="37,946" GCHandleCount="4,552,434"/>

The PinnedObjectCount is also shown here. Then there’s another field called GCHandleCount. As you would guess, GCHandleCount includes not just pinned handles but all handles. There’s also another significant difference: GCHandleCount covers all handles, whereas PinnedObjectCount is only for what that GC sees – you could have a lot more pinned handles, but since only the ones for the generation being collected are reported, PinnedObjectCount only includes those (plus stack-pinned objects). Another thing worth mentioning is that we track GCHandleCount “loosely” in coreclr, as in, we don’t use Interlocked inc/dec for perf reasons. We just do ++/-- so this count is a rough figure, but it gives you a good idea perf-wise (eg, in the example shown, that’s a lot of handles and definitely worth some investigation).

  • Of course if you use the TraceEvent library you can get all the above info on the TraceGC class programmatically.

As you can see, this highlights the pinned objects, not other types of handles like the weak GC handles used by WeakReference, the dependent handles used by ConditionalWeakTable, or the SizedRef handles used by asp.net in full framework (these no longer exist on coreclr so I’ll not cover them here). There are other types, as shown in gcinterface.h, but they are used internally by the runtime.

Before we provide more easily consumable info, one fairly easy thing you could do is look at the CPU profiles to see if you should be worried about the usage of these handles – short WeakReferences are scanned with this method:

GCScan::GcShortWeakPtrScan (in case it’s inlined it calls Ref_CheckAlive)

long WeakReferences are scanned with:

GCScan::GcWeakPtrScan (in case it’s inlined it calls Ref_CheckReachable)

Dependent handles are scanned with:

gc_heap::scan_dependent_handles in blocking GCs
gc_heap::background_scan_dependent_handles in BGCs

I know this is not ideal but it’s one way to get the info. One thing worth mentioning is these are currently all done during the STW pause for BGCs. And dependent handle scanning is currently done in a not very efficient way which is the main reason why I haven’t exposed this directly as a GCHandle type. I have a design in place to make the perf much better. When we have it implemented we will make this handle type public.

The PerfView command line to collect events for creating/destroying handles is:
perfview /nogui /KernelEvents=default /ClrEvents:GC+Stack+GCHandle /clrEventLevel=Informational collect

The post GC Handles appeared first on .NET Blog.


.NET Interactive is here! | .NET Notebooks Preview 2


In November 2019, we announced .NET support for Jupyter notebooks with both C# and F# support. Today we are excited to announce Preview 2 of the .NET Notebook experience.

What’s new

New Name – Meet .NET Interactive

As our scenarios grew in Try .NET, we wanted a new name that encompassed all our new experiences: from the runnable snippets on the web powered by Blazor (as seen on the .NET page), to interactive documentation for .NET Core with the dotnet try global tool, to .NET Notebooks.

Today we are announcing our official name change to .NET Interactive.

.NET Interactive is a group of CLI tools and APIs that enable users to create interactive experiences across the web, markdown, and notebooks.

.NET Interactive Breakdown

  • dotnet interactive global tool : For .NET Notebooks (Jupyter and nteract)
  • dotnet try global tool : For Workshops and offline docs. Interactive markdown with a backing project.
  • trydotnet.js API (not publicly available yet): Online documentation. For example, on docs and .NET page. Currently, only used internally at Microsoft.

New Repo – dotnet/interactive

Moving forward, we have decided to split dotnet try and dotnet interactive tools into separate repos.

  • For any issues, feature requests, and contributions to .NET Notebooks, please visit the .NET Interactive repo.
  • For any issues, feature requests, and contributions on interactive markdown and trydotnet.js, please visit the Try .NET repo.

New Global Tool – dotnet interactive

How to Install .NET Interactive

First, make sure you have the following installed:

  • The .NET Core 3.1 SDK.
  • Jupyter. Jupyter can be installed using Anaconda.

  • Open the Anaconda Prompt (Windows) or Terminal (macOS) and verify that Jupyter is installed and present on the path:

> jupyter kernelspec list
  python3        ~\jupyter\kernels\python3
  • Next, in an ordinary console, install the dotnet interactive global tool:
> dotnet tool install --global Microsoft.dotnet-interactive
  • Install the .NET kernel by running the following within your Anaconda Prompt:
> dotnet interactive jupyter install
[InstallKernelSpec] Installed kernelspec .net-csharp in ~\jupyter\kernels\.net-csharp
.NET kernel installation succeeded

[InstallKernelSpec] Installed kernelspec .net-fsharp in ~\jupyter\kernels\.net-fsharp
.NET kernel installation succeeded

[InstallKernelSpec] Installed kernelspec .net-powershell in ~\jupyter\kernels\.net-powershell
.NET kernel installation succeeded
  • You can verify the installation by running the following again in the Anaconda Prompt:
> jupyter kernelspec list
  .net-csharp    ~\jupyter\kernels\.net-csharp
  .net-fsharp    ~\jupyter\kernels\.net-fsharp
  .net-powershell ~\jupyter\kernels\.net-powershell
  python3        ~\jupyter\kernels\python3

Please Note: If you are looking for dotnet try experience please visit dotnet/try.

New language support – PowerShell

PowerShell Notebooks

PowerShell notebooks combine the management capabilities of PowerShell with the rich visual experience of notebooks. The integration of PowerShell’s executable experience with rich text and visualization opens up scenarios for PowerShell users to integrate and amplify their teaching and support documents. As an example, this demo of a new PowerShell feature was easily transformed into a shareable, interactive teaching tool.

With the multi-kernel experience provided by the .NET Interactive kernel, a single notebook, now with PowerShell support, can efficiently target both the management plane and the data plane.

DBAs, sysadmins, and support engineers alike have found PowerShell notebooks useful for resource manipulation and management. For example, this notebook teaches the user how to create an Azure VM from PowerShell.

We look forward to seeing what our customers do with this experience. Read the PowerShell blog post for more information.

Run .NET Code in nteract.io

nteract animated logo

In addition to writing .NET code in Jupyter Notebooks, users can now write their code in nteract. nteract is an open-source organization that builds SDKs, applications, and libraries that help people make the most of interactive notebooks and REPLs. We are excited to have our .NET users take advantage of the rich REPL experience nteract provides, including the nteract desktop app.

To get started with .NET Interactive in nteract please download the nteract desktop app and install the .NET kernels.

Resources

Our team can’t wait to see what you do with .NET Interactive. Please check out our repo to learn more and let us know what you build.

Happy interactive programming!

The post .NET Interactive is here! | .NET Notebooks Preview 2 appeared first on .NET Blog.

Announcing TypeScript 3.8 RC


Today we’re announcing the Release Candidate for TypeScript 3.8! Between this RC and our final release, we expect no changes apart from critical bug fixes.

To get started using the RC, you can get it through NuGet, or through npm with the following command:

npm install typescript@rc

You can also get editor support for the RC in Visual Studio and Visual Studio Code.

TypeScript 3.8 brings a lot of new features, including new or upcoming ECMAScript standards features, new syntax for importing/exporting only types, and more.

Type-Only Imports and Exports

This feature is something most users may never have to think about; however, if you’ve hit issues here, it might be of interest (especially when compiling under --isolatedModules, our transpileModule API, or Babel).

TypeScript reuses JavaScript’s import syntax in order to let us reference types. For instance, in the following example, we’re able to import doThing which is a JavaScript value along with Options which is purely a TypeScript type.

// ./foo.ts
interface Options {
    // ...
}

export function doThing(options: Options) {
    // ...
}

// ./bar.ts
import { doThing, Options } from "./foo.js";

function doThingBetter(options: Options) {
    // do something twice as good
    doThing(options);
    doThing(options);
}

This is convenient because most of the time we don’t have to worry about what’s being imported – just that we’re importing something.

Unfortunately, this only worked because of a feature called import elision. When TypeScript outputs JavaScript files, it sees that Options is only used as a type, and it automatically drops its import. The resulting output looks kind of like this:

// ./foo.js
export function doThing(options) {
    // ...
}

// ./bar.js
import { doThing } from "./foo.js";

function doThingBetter(options) {
    // do something twice as good
    doThing(options);
    doThing(options);
}

Again, this behavior is usually great, but it causes some other problems.

First of all, there are some places where it’s ambiguous whether a value or a type is being exported. For example, in the following code, is MyThing a value or a type?

import { MyThing } from "./some-module.js";

export { MyThing };

Limiting ourselves to just this file, there’s no way to know. Both Babel and TypeScript’s transpileModule API will emit code that doesn’t work correctly if MyThing is only a type, and TypeScript’s isolatedModules flag will warn us that it’ll be a problem. The real problem here is that there’s no way to say “no, no, I really only meant the type – this should be erased”, so import elision isn’t good enough.

The other issue was that TypeScript’s import elision would get rid of import statements that only contained imports used as types. That caused observably different behavior for modules that have side-effects, and so users would have to insert a second import statement purely to ensure side-effects.

// This statement will get erased because of import elision.
import { SomeTypeFoo, SomeOtherTypeBar } from "./module-with-side-effects";

// This statement always sticks around.
import "./module-with-side-effects";

A concrete place where we saw this coming up was in frameworks like Angular.js (1.x) where services needed to be registered globally (which is a side-effect), but where those services were only imported for types.

// ./service.ts
export class Service {
    // ...
}
register("globalServiceId", Service);

// ./consumer.ts
import { Service } from "./service.js";

inject("globalServiceId", function (service: Service) {
    // do stuff with Service
});

As a result, ./service.js will never get run, and things will break at runtime.

To avoid this class of issues, we realized we needed to give users more fine-grained control over how things were getting imported/elided.

As a solution in TypeScript 3.8, we’ve added a new syntax for type-only imports and exports.

import type { SomeThing } from "./some-module.js";

export type { SomeThing };

import type only imports declarations to be used for type annotations and declarations. It always gets fully erased, so there’s no remnant of it at runtime. Similarly, export type only provides an export that can be used for type contexts, and is also erased from TypeScript’s output.

It’s important to note that classes have a value at runtime and a type at design-time, and the use is very context-sensitive. When using import type to import a class, you can’t do things like extend from it.

import type { Component } from "react";

interface ButtonProps {
    // ...
}

class Button extends Component<ButtonProps> {
    //               ~~~~~~~~~
    // error! 'Component' only refers to a type, but is being used as a value here.

    // ...
}

If you’ve used Flow before, the syntax is fairly similar. One difference is that we’ve added a few restrictions to avoid code that might appear ambiguous.

// Is only 'Foo' a type? Or every declaration in the import?
// We just give an error because it's not clear.

import type Foo, { Bar, Baz } from "some-module";
//     ~~~~~~~~~~~~~~~~~~~~~~
// error! A type-only import can specify a default import or named bindings, but not both.

In conjunction with import type, we’ve also added a new compiler flag to control what happens with imports that won’t be utilized at runtime: importsNotUsedAsValues. This flag takes 3 different values (a minimal tsconfig sketch follows the list):

  • remove: this is today’s behavior of dropping these imports. It’s going to continue to be the default, and is a non-breaking change.
  • preserve: this preserves all imports whose values are never used. This can cause imports/side-effects to be preserved.
  • error: this preserves all imports (the same as the preserve option), but will error when a value import is only used as a type. This might be useful if you want to ensure no values are being accidentally imported, but still make side-effect imports explicit.
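
Here’s a minimal sketch of what opting into the strictest setting might look like in a tsconfig.json:

{
    "compilerOptions": {
        // Flag any import whose bindings are only ever used as types.
        "importsNotUsedAsValues": "error"
    }
}

With "error", the usual fix is to switch the offending import to an import type statement, adding a separate side-effect import (import "./module-with-side-effects";) if the module genuinely needs to run.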

For more information about the feature, you can take a look at the pull request, and some of the relevant changes that we’ve made since the beta release.

ECMAScript Private Fields

TypeScript 3.8 brings support for ECMAScript’s private fields, part of the stage-3 class fields proposal. This work was started and driven to completion by our good friends at Bloomberg!

class Person {
    #name: string

    constructor(name: string) {
        this.#name = name;
    }

    greet() {
        console.log(`Hello, my name is ${this.#name}!`);
    }
}

let jeremy = new Person("Jeremy Bearimy");

jeremy.#name
//     ~~~~~
// Property '#name' is not accessible outside class 'Person'
// because it has a private identifier.

Unlike regular properties (even ones declared with the private modifier), private fields have a few rules to keep in mind. Some of them are:

  • Private fields start with a # character. Sometimes we call these private names.
  • Every private field name is uniquely scoped to its containing class.
  • TypeScript accessibility modifiers like public or private can’t be used on private fields.
  • Private fields can’t be accessed or even detected outside of the containing class – even by JS users! Sometimes we call this hard privacy.

Apart from “hard” privacy, another benefit of private fields is that uniqueness we just mentioned. For example, regular property declarations are prone to being overwritten in subclasses.

class C {
    foo = 10;

    cHelper() {
        return this.foo;
    }
}

class D extends C {
    foo = 20;

    dHelper() {
        return this.foo;
    }
}

let instance = new D();
// 'this.foo' refers to the same property on each instance.
console.log(instance.cHelper()); // prints '20'
console.log(instance.dHelper()); // prints '20'

With private fields, you’ll never have to worry about this, since each field name is unique to the containing class.

class C {
    #foo = 10;

    cHelper() {
        return this.#foo;
    }
}

class D extends C {
    #foo = 20;

    dHelper() {
        return this.#foo;
    }
}

let instance = new D();
// 'this.#foo' refers to a different field within each class.
console.log(instance.cHelper()); // prints '10'
console.log(instance.dHelper()); // prints '20'

Another thing worth noting is that accessing a private field on any other type will result in a TypeError!

class Square {
    #sideLength: number;

    constructor(sideLength: number) {
        this.#sideLength = sideLength;
    }

    equals(other: any) {
        return this.#sideLength === other.#sideLength;
    }
}

const a = new Square(100);
const b = { sideLength: 100 };

// Boom!
// TypeError: attempted to get private field on non-instance
// This fails because 'b' is not an instance of 'Square'.
console.log(a.equals(b));

Finally, for any plain .js file users, private fields always have to be declared before they’re assigned to.

class C {
    // No declaration for '#foo'
    // :(

    constructor(foo) {
        // SyntaxError!
        // '#foo' needs to be declared before writing to it.
        this.#foo = foo;
    }
}

JavaScript has always allowed users to access undeclared properties, whereas TypeScript has always required declarations for class properties. With private fields, declarations are always needed regardless of whether we’re working in .js or .ts files.

class C {
    /** @type {number} */
    #foo;

    constructor(foo) {
        // This works.
        this.#foo = foo;
    }
}

For more information about the implementation, you can check out the original pull request.

Which should I use?

We’ve already received many questions on which type of privates you should use as a TypeScript user: most commonly, “should I use the private keyword, or ECMAScript’s hash/pound (#) private fields?”

Like all good questions, the answer is not good: it depends!

When it comes to properties, TypeScript’s private modifiers are fully erased – that means that while the data will be there, nothing is encoded in your JavaScript output about how the property was declared. At runtime, it acts entirely like a normal property. That means that when using the private keyword, privacy is only enforced at compile-time/design-time, and for JavaScript consumers, it’s entirely intent-based.

class C {
    private foo = 10;
}

// This is an error at compile time,
// but when TypeScript outputs .js files,
// it'll run fine and print '10'.
console.log(new C().foo);    // prints '10'
//                  ~~~
// error! Property 'foo' is private and only accessible within class 'C'.

// TypeScript allows this at compile-time
// as a "work-around" to avoid the error.
console.log(new C()["foo"]); // prints '10'

The upside is that this sort of “soft privacy” can help your consumers temporarily work around not having access to some API, and works in any runtime.

On the other hand, ECMAScript’s # privates are completely inaccessible outside of the class.

class C {
    #foo = 10;
}

console.log(new C().#foo); // SyntaxError
//                  ~~~~
// TypeScript reports an error *and*
// this won't work at runtime!

console.log(new C()["#foo"]); // prints undefined
//          ~~~~~~~~~~~~~~~
// TypeScript reports an error under 'noImplicitAny',
// and this prints 'undefined'.

This hard privacy is really useful for strictly ensuring that nobody can make use of any of your internals. If you’re a library author, removing or renaming a private field should never cause a breaking change.

As we mentioned, another benefit is that subclassing can be easier with ECMAScript’s # privates because they really are private. When using ECMAScript # private fields, no subclass ever has to worry about collisions in field naming. When it comes to TypeScript’s private property declarations, users still have to be careful not to trample over properties declared in superclasses.

Finally, something to consider is where you intend for your code to run. TypeScript currently can’t support this feature unless targeting ECMAScript 2015 (ES6) or higher. This is because our downleveled implementation uses WeakMaps to enforce privacy, and WeakMaps can’t be polyfilled in a way that doesn’t cause memory leaks. In contrast, TypeScript’s private-declared properties work with all targets – even ECMAScript 3!

Kudos!

It’s worth reiterating how much work went into this feature from our contributors at Bloomberg. They were diligent in taking the time to learn to contribute features to the compiler/language service, and paid close attention to the ECMAScript specification to test that the feature was implemented in a compliant manner. They even improved another 3rd party project, CLA Assistant, which made contributing to TypeScript even easier.

We’d like to extend a special thanks to:

export * as ns Syntax

It’s common to have a single entry-point that exposes all the members of another module as a single member.

import * as utilities from "./utilities.js";
export { utilities };

This is so common that ECMAScript 2020 recently added a new syntax to support this pattern!

export * as utilities from "./utilities.js";

This is a nice quality-of-life improvement to JavaScript, and TypeScript 3.8 implements this syntax. When your module target is earlier than es2020, TypeScript will output something along the lines of the first code snippet.

Special thanks to community member Wenlu Wang (Kingwl) who implemented this feature! For more information, check out the original pull request.

Top-Level await

Most of the I/O that modern JavaScript environments provide (like HTTP requests) is asynchronous, and many modern APIs return Promises. While this has a lot of benefits in making operations non-blocking, it makes certain things like loading files or external content surprisingly tedious.

fetch("...")
    .then(response => response.text())
    .then(greeting => { console.log(greeting) });

To avoid .then chains with Promises, JavaScript users often introduced an async function in order to use await, and then immediately called the function after defining it.

async function main() {
    const response = await fetch("...");
    const greeting = await response.text();
    console.log(greeting);
}

main()
    .catch(e => console.error(e))

To avoid introducing an async function, we can use a handy upcoming ECMAScript feature called “top-level await“.

Previously in JavaScript (along with most other languages with a similar feature), await was only allowed within the body of an async function. However, with top-level await, we can use await at the top level of a module.

const response = await fetch("...");
const greeting = await response.text();
console.log(greeting);

// Make sure we're a module
export {};

Note there’s a subtlety: top-level await only works at the top level of a module, and files are only considered modules when TypeScript finds an import or an export. In some basic cases, you might need to write out export {} as some boilerplate to make sure of this.

Top-level await may not work in all the environments you might expect at this point. Currently, you can only use top-level await when the target compiler option is es2017 or above, and module is esnext or system. Support within several environments and bundlers may be limited or may require enabling experimental support.

For more information on our implementation, you can check out the original pull request.

es2020 for target and module

Thanks to Kagami Sascha Rosylight (saschanaz), TypeScript 3.8 supports es2020 as an option for module and target. This will preserve newer ECMAScript 2020 features like optional chaining, nullish coalescing, export * as ns, and dynamic import(...) syntax. It also means bigint literals now have a stable target below esnext.
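
As a quick sketch (the types and names here are invented for illustration), syntax like this now passes through untouched when targeting es2020, instead of being downleveled:

interface Config {
    retries?: { max?: number };
}

function maxRetries(config?: Config): number {
    // Optional chaining (?.) and nullish coalescing (??) are ES2020 features,
    // so under --target es2020 they are emitted as-is.
    return config?.retries?.max ?? 3;
}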

JSDoc Property Modifiers

TypeScript 3.8 supports JavaScript files by turning on the allowJs flag, and also supports type-checking those JavaScript files via the checkJs option or by adding a // @ts-check comment to the top of your .js files.

Because JavaScript files don’t have dedicated syntax for type-checking, TypeScript leverages JSDoc. TypeScript 3.8 understands a few new JSDoc tags for properties.

First are the accessibility modifiers: @public, @private, and @protected. These tags work exactly like public, private, and protected respectively work in TypeScript.

// @ts-check

class Foo {
    constructor() {
        /** @private */
        this.stuff = 100;
    }

    printStuff() {
        console.log(this.stuff);
    }
}

new Foo().stuff;
//        ~~~~~
// error! Property 'stuff' is private and only accessible within class 'Foo'.

  • @public is always implied and can be left off, but means that a property can be reached from anywhere.
  • @private means that a property can only be used within the containing class.
  • @protected means that a property can only be used within the containing class, and all derived subclasses, but not on dissimilar instances of the containing class.

Next, we’ve also added the @readonly modifier to ensure that a property is only ever written to during initialization.

// @ts-check

class Foo {
    constructor() {
        /** @readonly */
        this.stuff = 100;
    }

    writeToStuff() {
        this.stuff = 200;
        //   ~~~~~
        // Cannot assign to 'stuff' because it is a read-only property.
    }
}

new Foo().stuff++;
//        ~~~~~
// Cannot assign to 'stuff' because it is a read-only property.

Better Directory Watching and watchOptions

TypeScript 3.8 ships a new strategy for watching directories, which is crucial for efficiently picking up changes to node_modules.

For some context, on operating systems like Linux, TypeScript installs directory watchers (as opposed to file watchers) on node_modules and many of its subdirectories to detect changes in dependencies. This is because the number of available file watchers is often eclipsed by the number of files in node_modules, whereas there are way fewer directories to track.

Older versions of TypeScript would immediately install directory watchers on folders, and at startup that would be fine; however, during an npm install, a lot of activity will take place within node_modules and that can overwhelm TypeScript, often slowing editor sessions to a crawl. To prevent this, TypeScript 3.8 waits slightly before installing directory watchers to give these highly volatile directories some time to stabilize.

Because every project might work better under different strategies, and this new approach might not work well for your workflows, TypeScript 3.8 introduces a new watchOptions field in tsconfig.json and jsconfig.json which allows users to tell the compiler/language service which watching strategies should be used to keep track of files and directories.

{
    // Some typical compiler options
    "compilerOptions": {
        "target": "es2020",
        "moduleResolution": "node",
        // ...
    },

    // NEW: Options for file/directory watching
    "watchOptions": {
        // Use native file system events for files and directories
        "watchFile": "useFsEvents",
        "watchDirectory": "useFsEvents",

        // Poll files for updates more frequently
        // when they're updated a lot.
        "fallbackPolling": "dynamicPriority"
    }
}

watchOptions contains 4 new options that can be configured:

  • watchFile: the strategy for how individual files are watched. This can be set to
    • fixedPollingInterval: Check every file for changes several times a second at a fixed interval.
    • priorityPollingInterval: Check every file for changes several times a second, but use heuristics to check certain types of files less frequently than others.
    • dynamicPriorityPolling: Use a dynamic queue where less-frequently modified files will be checked less often.
    • useFsEvents (the default): Attempt to use the operating system/file system’s native events for file changes.
    • useFsEventsOnParentDirectory: Attempt to use the operating system/file system’s native events to listen for changes on a file’s containing directories. This can use fewer file watchers, but might be less accurate.
  • watchDirectory: the strategy for how entire directory trees are watched under systems that lack recursive file-watching functionality. This can be set to:
    • fixedPollingInterval: Check every directory for changes several times a second at a fixed interval.
    • dynamicPriorityPolling: Use a dynamic queue where less-frequently modified directories will be checked less often.
    • useFsEvents (the default): Attempt to use the operating system/file system’s native events for directory changes.
  • fallbackPolling: when using file system events, this option specifies the polling strategy that gets used when the system runs out of native file watchers and/or doesn’t support native file watchers. This can be set to
    • fixedPollingInterval: (See above.)
    • priorityPollingInterval: (See above.)
    • dynamicPriorityPolling: (See above.)
  • synchronousWatchDirectory: Disable deferred watching on directories. Deferred watching is useful when lots of file changes might occur at once (e.g. a change in node_modules from running npm install), but you might want to disable it with this flag for some less-common setups.

For more information on these changes, head over to GitHub to see the pull request to read more.

“Fast and Loose” Incremental Checking

TypeScript’s --watch mode and --incremental mode can help tighten the feedback loop for projects. Turning on --incremental mode makes TypeScript keep track of which files can affect others, and on top of doing that, --watch mode keeps the compiler process open and reuses as much information in memory as possible.

However, for much larger projects, even the dramatic gains in speed that these options afford us aren’t enough. For example, the Visual Studio Code team had built their own build tool around TypeScript called gulp-tsb, which would be less accurate in assessing which files needed to be rechecked/rebuilt in its watch mode and, as a result, could provide drastically lower build times.

Sacrificing accuracy for build speed, for better or worse, is a tradeoff many are willing to make in the TypeScript/JavaScript world. Lots of users prioritize tightening their iteration time over addressing the errors up-front. As an example, it’s fairly common to build code regardless of the results of type-checking or linting.

TypeScript 3.8 introduces a new compiler option called assumeChangesOnlyAffectDirectDependencies. When this option is enabled, TypeScript will avoid rechecking/rebuilding all truly possibly-affected files, and only recheck/rebuild files that have changed as well as files that directly import them.

For example, consider a file fileD.ts that imports fileC.ts that imports fileB.ts that imports fileA.ts as follows:

fileA.ts <- fileB.ts <- fileC.ts <- fileD.ts

In --watch mode, a change in fileA.ts would typically mean that TypeScript would need to at least re-check fileB.ts, fileC.ts, and fileD.ts. Under assumeChangesOnlyAffectDirectDependencies, a change in fileA.ts means that only fileA.ts and fileB.ts need to be re-checked.

In a codebase like Visual Studio Code, this reduced rebuild times for changes in certain files from about 14 seconds to about 1 second. While we don’t necessarily recommend this option for all codebases, you might be interested if you have an extremely large codebase and are willing to defer full project errors until later (e.g. a dedicated build via a tsconfig.fullbuild.json or in CI).
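
If you want to experiment with it, a minimal tsconfig.json sketch might look like the following (the option name comes from the paragraph above; incremental is shown only because the option is most useful alongside --incremental or --watch):

{
    "compilerOptions": {
        "incremental": true,
        // Trade some accuracy for much faster rechecks of large projects.
        "assumeChangesOnlyAffectDirectDependencies": true
    }
}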

For more details, you can see the original pull request.

Breaking Changes

TypeScript 3.8 contains a few minor breaking changes that should be noted.

Stricter Assignability Checks to Unions with Index Signatures

Previously, excess properties were unchecked when assigning to unions where any type had an index signature – even if that excess property could never satisfy that index signature. In TypeScript 3.8, the type-checker is stricter, and only “exempts” properties from excess property checks if that property could plausibly satisfy an index signature.

let obj1: { [x: string]: number } | { a: number };

obj1 = { a: 5, c: 'abc' };
//             ~
// Error!
// The type '{ [x: string]: number }' no longer exempts 'c'
// from excess property checks on '{ a: number }'.

let obj2: { [x: string]: number } | { [x: number]: number };

obj2 = { a: 'abc' };
//       ~
// Error!
// The types '{ [x: string]: number }' and '{ [x: number]: number }' no longer exempts 'a'
// from excess property checks against '{ [x: number]: number }',
// and it *is* sort of an excess property because 'a' isn't a numeric property name.
// This one is more subtle.

object in JSDoc is No Longer any Under noImplicitAny

Historically, TypeScript’s support for checking JavaScript has been lax in certain ways in order to provide an approachable experience.

For example, users often used Object in JSDoc to mean “some object, I dunno what”, and we’ve treated it as any.

// @ts-check

/**
 * @param thing {Object} some object, i dunno what
 */
function doSomething(thing) {
    let x = thing.x;
    let y = thing.y;
    thing();
}

This is because treating it as TypeScript’s Object type would end up in code reporting uninteresting errors, since the Object type is an extremely vague type with few capabilities other than methods like toString and valueOf.

However, TypeScript does have a more useful type named object (notice that lowercase o). The object type is more restrictive than Object, in that it rejects all primitive types like string, boolean, and number. Unfortunately, both Object and object were treated as any in JSDoc.

Because object can come in handy and is used significantly less than Object in JSDoc, we’ve removed the special-case behavior in JavaScript files when using noImplicitAny so that in JSDoc, the object type really refers to the non-primitive object type.
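
As a rough sketch of the new behavior (the function and property names here are invented), using the lowercase object in JSDoc under noImplicitAny now surfaces errors instead of silently treating the parameter as any:

// @ts-check

/**
 * @param {object} thing a non-primitive object, no longer treated as 'any'
 */
function doSomethingElse(thing) {
    // error! Property 'x' does not exist on type 'object'.
    let x = thing.x;
    return x;
}

// error! A primitive such as a string is not assignable to 'object'.
doSomethingElse("hello");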

What’s Next?

As you can see on our current Iteration Plan, the final release of TypeScript 3.8 is only a few weeks out, but it’s crucial that we get feedback about the RC before we release. As editor features we’ve developed become more mature, we’ll also show off functionality like Call Hierarchy and the “convert to template string” refactoring (which you can try now in Visual Studio Code Insiders). We would love it if you and your team could give it a try today and file an issue if you run into anything.

So download the RC today! And happy hacking!

– Daniel Rosenwasser and the TypeScript Team

The post Announcing TypeScript 3.8 RC appeared first on TypeScript.

Visual Studio Code CMake Tools Extension: Multi-root workspaces and file-based API


The February 2020 update of the Visual Studio Code CMake Tools extension is now available. This release includes two of the extension’s top feature requests: file-based API support and multi-root workspaces. For a full list of this release’s improvements check out the release notes on GitHub.

Multi-root workspace support

The latest release of the CMake Tools extension comes with support for multi-root workspaces. This means you can have two or more folders containing a root CMakeLists.txt open side-by-side in Visual Studio Code. When a workspace contains multiple folders, the CMake Tools extension will display the active folder in the left-hand side of the status bar. The active folder is the folder to which all CMake-specific commands (configure, build, debug, etc.) are applied. In the following example my active folder is CMakeProject-1.

Status bar in Visual Studio Code shows the active folder in the left-hand corner, before the active debug target.

By default, the active folder will change based on your file context. Viewing or editing a file in CMakeProject-1 will cause CMakeProject-1 to be the active folder, while viewing or editing a file in CMakeProject-2 will cause CMakeProject-2 to be the active folder. You can temporarily override the active folder by selecting the active folder in the status bar or running the CMake: Select Active Folder command.

The CMake: Select Active Folder command prompts you to select the active CMake folder.

You can also disable this behavior by setting the user-level or workspace-level setting CMake: Auto Select Active Folder to false. To open your workspace settings, use the command “Workspaces: Open Workspace Configuration File”. If cmake.autoSelectActiveFolder is set to false then your active folder will only change if you manually run the CMake: Select Active Folder command.
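
For example, a minimal settings sketch (using the setting key mentioned above, placed in your user or workspace settings JSON) would be:

{
    // Keep the active CMake folder fixed until you change it manually.
    "cmake.autoSelectActiveFolder": false
}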

Finally, the CMake Tools extension has also added new commands like CMake: Configure All Projects and CMake: Build All Projects to apply existing CMake commands to all the folders in your workspace. These commands are only available when you have more than one folder open in your workspace.

'All' CMake commands apply to all folders in the workspace, not just the active folder.

Commands to configure, build, clean, rebuild and reconfigure all projects are also available from the CMake: Project Outline view.

File-based API

Thank you to @KoeMai for submitting this PR!

CMake version 3.14 introduced file-based API, which is a new way for clients (like the CMake Tools extension) to get semantic information about the underlying build system generated by CMake. It allows the client to write query files prior to build system generation. During build system generation CMake will read those query files and write object model response files for the client to read. Previously the CMake Tools extension only supported cmake-server mode, which was deprecated with CMake version 3.15. File-based API provides a faster and more streamlined way for the extension to populate the editor with information specific to your project structure because it is reading response files instead of running CMake in long-running server mode.
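
To get a feel for what the extension consumes under the hood, you can exercise the file-based API by hand. Here is a rough sketch from a shell (the paths follow CMake’s documented file-based API layout; the build directory name is arbitrary):

mkdir -p build/.cmake/api/v1/query
touch build/.cmake/api/v1/query/codemodel-v2   # ask for the generated build system's object model
cmake -S . -B build                            # CMake writes JSON replies during generation
ls build/.cmake/api/v1/reply                   # index-*.json, codemodel-v2-*.json, ...

The extension writes equivalent query files itself and parses the JSON replies, which is why it can be faster than keeping a long-running cmake-server process around.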

The latest release of the CMake Tools extension supports file-based API. The setting CMake: CMake Communication Mode has been added with the following possible values (a settings sketch follows the list). The default value is automatic.

  • automatic: uses file-api if CMake version is >= 3.14 and falls back to cmake-server if CMake version is < 3.14
  • fileApi
  • serverApi
  • legacy: use only with old CMake versions <= 3.7. Functionality will be reduced
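
If you want to pin the communication mode explicitly, a settings sketch looks like the following; note that the exact JSON key is an assumption based on the setting’s display name, so verify it in the settings UI before relying on it:

{
    // Force file-based API rather than letting the extension autodetect.
    "cmake.cmakeCommunicationMode": "fileApi"
}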

Feedback is welcome

Download the CMake Tools extension for Visual Studio Code today and give it a try. If you run into issues or have suggestions for the team, please report them in the issues section of the extension’s GitHub repository. You can also reach the team via email (visualcpp@microsoft.com) and Twitter (@VisualC).

The post Visual Studio Code CMake Tools Extension: Multi-root workspaces and file-based API appeared first on C++ Team Blog.

Garbage Collection at Food Courts


When I first started working on the GC, my predecessor was explaining the GC tuning to me. I told him that I thought it sounded like how I saw janitors work at food courts (I frequented food courts at the time 😀). And he concurred.

What I said was that if you observe a food court, you’ll notice that in order to be productive, the janitor tries to collect a sizable amount of dirty dishes when they come out to collect, which means they collect more often when it’s busy so as not to risk running out of clean dishes, and less often when it’s not busy because there are very few dishes to be collected. This is the same way that the GC tunes. The dirty dishes are like the space occupied by dead objects – it was used (ie, dirtied) and can now be reclaimed. The clean dishes are like the cleared space GC hands out for people to use for new objects, ie, allocations. If GC does a collection and didn’t find much dead space, meaning a high survival rate, it will wait till more cleared space is used, or in other words, more allocations are made, before it does the next collection.

During meal times, food courts are busy so the clean dishes are consumed more quickly, and the janitor adjusts to that. This is the same way the GC adjusts the amount of allocation allowed before the next collection. It’s called the allocation budget. And this “adjusting” that GC does is part of its dynamic tuning. As the process runs, GC will modify this allocation budget according to the survival rate it observes to keep itself productive.

Now, the janitor needs a way to recognize if a dish is still in use – it would be quite rude for the janitor to take your dish away if you are still in the middle of using it (if they do that a lot I imagine they’d be fired…). If you are still eating, that’s pretty obvious. That’s the most common scenario. If say you need to leave temporarily (like if you suddenly remembered you needed to get a slice of cake before that dessert place closes…), you might put your jacket on the chair to indicate that you are still here. GC also needs to know if an object is still in use. The most common way is just by one object having a reference to another and that’s obvious to the GC. And there also exist other less common ways that require more effort to indicate to the GC that an object is in use, for example, the GC handles.

Over the years I’ve used this food court analogy to explain GC tuning to others and it seems to be well received, even by folks who don’t work in tech fields at all. Once I was at a class about communication and each person was required to give a 1-minute speech to explain their work. Many people in that class worked in finance and sales. I used the food court analogy, and years later people from the class still enthusiastically told me that they understood how a garbage collector works 😄 thanks to the analogy, and still remembered it to that day.

The runtime team used to have what we called the “CLR Foundations” series where someone explains to the rest of the team the area they worked on. So when I explained the GC I used the food court analogy. Some of my coworkers told me they really enjoyed it. Of course as creative as they were, they started suggesting other things at a food court that could be used. I remember one suggested that we could use how people are seated to illustrate a compacting collector, as in, if there aren’t that many people, they could be sitting sparsely at tables. But if they are running out of tables, people can sit closely together to free up space to accommodate more.

A while ago the cafeteria at work (for the building I was in) started to implement this new “zero waste” thing, which meant we no longer throw away garbage or put dirty dishes away ourselves. Instead a cafeteria employee acts as the garbage collector. So it’s like at a food court. And this garbage collector had a very aggressive collection policy. Sometimes when people had just finished eating and were still chatting at the table, they would already start a collection, which meant each person would need to pass them their used dishes 😛 Knowing this I was very mindful with my food, but one time I was still not quick enough – after I put my food down for a few seconds they already started collecting. I sent the cafeteria manager an email to suggest a way to indicate “in use”. As always she’s extremely nice; and apparently she already had a way to indicate this and just needed to remind some of her staff of it. She also said “I will also make sure they understand the importance of waiting before clearing a table after the initial lunch rush”. And I thought, “hey, that’s dynamic tuning right there!”.

The post Garbage Collection at Food Courts appeared first on .NET Blog.

Accessibility Improvements in Visual Studio 2019 for Mac


Demonstration of changes to Visual Studio for Mac when High Contrast Mode is enabled.

The release of Visual Studio 2019 for Mac version 8.4.4 includes numerous improvements to the color representation of icons and to warning and error status messages. The new appearance is easy to spot, and the new color palette is highly noticeable. Let me explain why these changes were necessary, and what exactly was changed.

Currently, more than 1 billion people experience some form of disability. There are various types of obstacles people must live with – mobility, cognitive, neural, speech, and hearing. But let’s talk about our visual accessibility enhancements, and what you can experience in the most recent versions of Visual Studio for Mac.

The World Health Organization calculated that approximately 200 million people currently live with some form of vision impairment. Our goal is for Visual Studio for Mac to be accessible to everyone. We must make sure that we deliver the best user interface experience to every user, whether they are visually impaired or not. There are many visual accessibility issues users may suffer from: low vision, color or total blindness, cataracts. Even such a common thing as sun glare could be a problem when using an application UI. One of the methods to empower visually impaired users to interact with applications more effectively is through color accessibility.

One of the fundamental ways for us to deliver an accessible UI is to boost the contrast ratio threshold of all interactive content – primarily text and icons. On a Mac, the background-to-text contrast ratio must be at least 3:1, and at least 4.5:1 in High Contrast Mode. We’ll talk more about this later in the blog post.

Another essential requirement here is that we shouldn’t display information differences with just a color shift, such as a status change between an inactive and active icon. Similarly, no information should rely only on color to show its severity. That means elements such as error or warning messages should not use only the background color to communicate their status. We need something more: for example, a highly visible error or warning symbol. In older versions of Visual Studio for Mac, there were numerous instances where we showed a status difference using just a color. Now, we use a more distinctive rendering of activated, disabled and stopped icons, not relying solely on color. We’ve eliminated those sorts of situations in the interest of greater visual clarity.

High Contrast Mode

On a Mac, you can toggle the setting for High Contrast Mode by visiting Accessibility Preferences in System Preferences and clicking the Increase Contrast checkbox:

macOS accessibility preferences window

High Contrast Mode increases the color contrast of the whole system’s UI. Controls begin to use strokes and more easily visible shapes and labels. The colors are adjusted to appear more vibrant, and the difference in brightness between the foreground and the background is much more noticeable.

Unfortunately, not all applications on our desktops support High Contrast Mode. Native macOS controls provide High Contrast rendering for free, but it’s up to developers to update their custom controls. Some parts of the Visual Studio for Mac shell are heavily customized, so we still have some way to go.

Of course, using new colors and icons isn’t the only way to improve accessibility. We also wanted to ensure we enhance the experience for users of screen readers and to make sure keyboard shortcuts are available everywhere. We have many more improvements we’ll talk about, and others we’ll introduce soon. For now, we’ll focus on the new color palette and improved icon set, new features that are currently visible to every Visual Studio for Mac user.

New Color Palette

Our old Visual Studio for Mac color palette, which was created many years ago, used contrast ratios that were too low, especially in the light IDE theme. Hence, it was finally time for us to update on this front. You can see a comparison between our old and new palettes below, with contrast ratios between the background and foreground.

Comparisons between contrast in older and new icon sets

The old palette had two variants: one for the light and one for the dark IDE theme. As you can see above, the old palette suffered from many problems, especially the color contrast ratio of light-theme warning icons, which was less than ideal. Yellow on white or light gray is extremely difficult for anyone to see.

The new palette fixes all these issues and is also simpler, with better semantic meaning of the color groups. Plus, it’s ready for High Contrast Mode.

Improved Icons

We have always had tons of icons in Visual Studio for Mac. By the time we released the changes detailed in this post, there were 1142 icons. Most of the icons came in four flavors: two for the light and dark themes, and two for selected states (usually a white-only glyph, displayed on top of the system-wide accent color). We had all of these a second time because we needed icons available for standard and high DPI (@2x) resolutions.

Now, we have twice as many icons, and it was a gigantic job. Every one had to be checked for accessibility issues as described above, converted to the new palette, duplicated, and repainted using the new High Contrast palette. That means we’re not just introducing new High Contrast icons; we’re also improving all our already existing ones. At this very moment Visual Studio for Mac uses 13704 icon files.

Some icons needed to be redrawn or adjusted, as they relied solely on color to show differences, such as the difference between normal and active states:
Changes in shape and color between old and new icons.

New Warning and Error Colors

We also took this opportunity to change colors of warning- and error-related messages Visual Studio for Mac shows. You’ll notice this most with light theme warning text, which was previously brighter than ideal and potentially challenging to read.

Images showing increased contrast of information in Visual Studio for Mac status bar

We now have new colors for error popovers, with a better appearance in general and when in High Contrast Mode:

Examples of new coloring in Visual Studio for Mac popover menus.

Helpful to Everybody

The changes described above aim to make the UI of Visual Studio for Mac easier for all developers to use. Now we have not only highly readable icons for visually-impaired users, but our already established standard icon set sports new, more prominent contrast in colors as well – helping the users with no accessibility demands at all. In any case, we still have much work ahead of us, but we’re getting better every day.

If you have feedback on these changes, please let us know by reaching out to us in the comments below. You can also reach out to us on Twitter at @VisualStudioMac. If you run into any issues using Visual Studio for Mac, you can use Report a Problem to notify us. We also welcome your feature suggestions on the Visual Studio Developer Community website.

The post Accessibility Improvements in Visual Studio 2019 for Mac appeared first on Visual Studio Blog.
