
Microsoft IoT Hackathon accelerates solutions across industries


Hardware setup from IoT hackathon

Houston, Texas, has become a hub for digital innovation, making it the ideal location for the Microsoft IoT Hackathon that took place May 13-15. Fed by a competitive university system, Houston’s growing engineering and IT talent base is leading disruption in the manufacturing, energy and life sciences sectors. The City of Houston is also partnering with Microsoft to realize a smart city agenda with the goals of improving the effectiveness of city employees, streamlining transportation and better connecting citizens to local services, especially during emergencies.

Houston’s journey to work with Microsoft and its partners in leveraging the cloud for digital transformation and building repeatable Internet of Things (IoT) solutions was brought to life during the recent IoT in Action event on April 16, which featured Mayor Sylvester Turner as a keynote speaker. The success of this event with 700+ attendees was not the only reason Microsoft chose to host a hackathon in the City of Houston. We also saw it as an opportunity to provide startups and developers with the tools and resources they need to innovate and bring their IoT concepts to life—all in a city that is committed to its own IoT journey.

Read on as I share some of the topics explored during the hackathon, as well as examples of the IoT solutions being developed and built on Azure IoT and Windows 10 IoT.

Hacking for a more connected future

Our core focus for the hackathon was to help attendees explore how they can use these IoT capabilities and create solutions that enable a more seamless experience between the physical and digital worlds. We were joined by 15 attendees from 13 different companies within the broader Houston area. All came with a concept in mind and looked to explore Microsoft IoT solutions for industries like farming, warehousing, interior horticulture, manufacturing, energy, transportation, public safety, smart cities and traffic logistics.

Zan Gligorov from OrgPal was one of the attendees. OrgPal is a Houston-based smart automation and telemetry company focused on specialized IoT hardware and software solutions. Designed for use in the energy and smart city industries, OrgPal’s solutions focus on capturing field service and customer data for storage, analysis and management in the cloud or on premises. It also provides edge and endpoint hardware, along with the information gateway infrastructure that brings the data to your fingertips (on desktop, server and mobile devices). During the hackathon, Zan identified new data sources to feed OrgPal’s hardware offering and new predictive maintenance outputs to add to the telemetry solution, covering environmental conditions, equipment abuse and other common failure modes. The solution explored in the hackathon uses Windows 10 IoT Core and connects with Azure IoT services.


Zan Gligorov from OrgPal at hackathon in Station Houston hub for innovation and entrepreneurship.

Simplifying IoT development with Windows 10 and Azure IoT

IoT is a core strategy for driving better business outcomes, improving safety and addressing social issues. Yet for those just getting started, building and deploying IoT solutions can be expensive and time-consuming. Through Microsoft IoT Hackathons, our goal is to make it easier to quickly build secure, smart devices that leverage the intelligent cloud and harness the power of the intelligent edge.

Hackathon attendees can build and develop intelligent edge devices based on the Windows IoT family of operating systems, including:

  • Windows 10 IoT Core – helps manufacturers get to market quickly with small-footprint devices that are secure, lower cost and built for the intelligent edge. Windows IoT Core provides a royalty-free OS for prototyping, developing and testing IoT devices.
  • Windows 10 IoT Core Services – ensures long-term OS support and services for managing device updates and device health. Benefits include reduced operating costs with over-the-air updates that device manufacturers control for OS, apps and drivers—plus 10 years of OS security updates.
  • Windows 10 IoT Enterprise – provides a binary-identical, locked-down version of Windows 10 Enterprise that delivers enterprise manageability and security to a broad range of IoT solutions across multiple industries. It shares all the benefits of the worldwide Windows ecosystem, including the same familiar application compatibility, development and management tools as client PCs and laptops.
  • Windows Server IoT 2019 – securely handles the largest edge-computing workloads. Announced just this past February, Windows Server IoT 2019 brings the power of high-availability and high-performance storage and networking to the edge, addressing latency and connectivity requirements as well as enabling customers to maintain data on premises while securely storing and analyzing large amounts of data.

Those at the hackathon explored a variety of concept ideas using these technologies. For example, one attendee focused on enabling a better smart cities transportation solution. He used Windows Server IoT 2019 to quickly provide analysis and decision-making based on the data gathered from sensors attached to devices running Windows 10 IoT Core. Others explored connecting various devices running either Windows 10 IoT Core or Windows 10 IoT Enterprise to Azure IoT services with the goal of providing solutions in manufacturing, agriculture and energy.

Additionally, attendees received hands-on experience connecting their devices to Azure IoT Hub, Azure IoT Central, Azure Time Series Insights and numerous other Azure IoT capabilities.
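
For readers who want to try the same flow themselves, here is a minimal sketch of sending device-to-cloud telemetry with the Azure IoT Hub device SDK for Python; the connection string and payload fields are placeholders, not something built at the event.

# Minimal device-to-cloud telemetry sketch (pip install azure-iot-device).
# The connection string and sensor values below are placeholders.
import json
import time
from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=<device-id>;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()

for reading in range(5):
    payload = {"temperature": 20 + reading, "humidity": 60}  # example sensor values
    message = Message(json.dumps(payload))
    message.content_type = "application/json"
    message.content_encoding = "utf-8"
    client.send_message(message)  # send one telemetry message to IoT Hub
    time.sleep(1)

client.disconnect()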


Pratima Godse, architect at Daikin – which develops environmental systems for home, commercial and industrial applications – shares her progress with an event staffer at the Microsoft IoT Hackathon in Houston, Texas.

Fostering continued innovation in Houston

With the hackathon taking place in Station Houston – the city’s hub for innovation and entrepreneurship – we had the ideal setting to help participants explore new ideas or see how they could improve and harden proposed solutions as they move closer to commercialization. And all benefited from the detailed, hands-on technical training that established a common knowledge base from which the group worked.

Microsoft has been a key part of the City of Houston’s Smart City initiative and continues to invest through events like the hackathon to fuel additional innovation. As part of this investment, Microsoft is partnering with Intel to create the Ion Smart Cities Accelerator. The soon-to-be-converted 270,000-square-foot space in the emerging Midtown Innovation District will host pilot programs for companies developing Smart City technology. Currently, Station Houston is hosting the accelerator program.

Join us next time!

The Microsoft IoT Hackathons are an ideal opportunity to network with peers, demonstrate expertise, share best practices and insights, talk to subject matter experts and expand your skills. To join the fun, think about the concepts you want to work on and watch this space for announcements as we release future dates in coming months.

In the meantime, check out our global IoT in Action event series for learning opportunities coming to a city near you. And be sure to watch the on-demand webinar, Windows IoT: Business Transformation, to discover how Windows 10 IoT can help you get up and running quickly.

The post Microsoft IoT Hackathon accelerates solutions across industries appeared first on Windows Developer Blog.


Extension Spotlight – 7pace Timetracker


The Azure DevOps Marketplace keeps on growing, with around 1,000 extensions used every day by our customers. While it’s easy to find and search for something you know you need to solve a problem (like deploying to AWS or integrating with ServiceNow), we thought it would be good to blog about some of the extensions that add valuable additional functionality to Azure DevOps.

One highly rated extension created for Azure DevOps is 7pace Timetracker. It was also one of the earliest in the marketplace and continues to be improved. This extension is deeply integrated into Azure Boards and allows your team to easily record the time associated with a task right at their fingertips while they are working on the task. The data can then be used to better estimate efforts by learning from real work, generate work reports, manage budgets, meet capitalization requirements and more. Timetracker also provides a neat API to access data which can help you develop custom reports and tooling.

7pace Timetracker

“If you need to know where time and effort is really spent in Azure DevOps projects, Timetracker is a fantastic tool to help you do so.”

Jamie Cool – Director of Product Management, Azure DevOps

Recently, the Timetracker team updated the extension to let managers approve timesheets, and new reporting dashboards are slated for release in their next major version.

If you need to track how time is spent on your projects, 7pace Timetracker takes away a lot of the pain and end-of-week guesswork. There is a 28-day free trial when you install it from the marketplace, and you can head to their website (7pace.com) to learn about their subscription options.

The post Extension Spotlight – 7pace Timetracker appeared first on Azure DevOps Blog.

HB-series Azure Virtual Machines achieve cloud supercomputing milestone


New HPC-targeted cloud virtual machines are first to scale to 10,000 cores

Azure HB-series Virtual Machines are the first on the public cloud to scale an MPI-based high performance computing (HPC) job to 10,000 cores. This level of scaling has long been considered the realm of only the world’s most powerful and exclusive supercomputers, but is now available to anyone using Azure.

HB-series virtual machines (VMs) are optimized for HPC applications requiring high memory bandwidth. For this class of workload, HB-series VMs are the most performant, scalable, and price-performant ever launched on Azure or elsewhere on the public cloud.

Powered by AMD EPYC processors, the HB-series delivers more than 260 GB/s of memory bandwidth, 128 MB of L3 cache, and SR-IOV-based 100 Gb/s InfiniBand. At scale, a customer can utilize up to 18,000 physical CPU cores and more than 67 terabytes of memory for a single distributed memory computational workload.

For memory-bandwidth bound workloads, the HB-series delivers something many in HPC thought might never happen: Azure-based VMs are now as capable as, or more capable than, the bare-metal, on-premises status quo that dominates the HPC market, and at a highly competitive price point.

World-class HPC technology

HB-series VMs feature the cloud’s first deployment of AMD EPYC 7000-series CPUs explicitly for HPC customers. AMD EPYC delivers 33 percent more memory bandwidth than any x86 alternative, and even more than leading POWER and ARM server platforms. To put that in context, the 263 GB/s of memory bandwidth the HB-series VM delivers is 80 percent more than competing cloud offerings in the same memory-per-core class.

HB-series VMs expose 60 non-hyperthreaded CPU cores and 240 GB of RAM, with a base clock of 2.0 GHz and an all-cores boost speed of 2.55 GHz. HB VMs also feature a 700 GB local NVMe SSD, and support up to four Managed Disks, including the new Azure P60/P70/P80 Premium Disks.

A flagship feature of HB-series VMs is 100 Gb/s InfiniBand from Mellanox. HB-series VMs expose the Mellanox ConnectX-5 dedicated back-end NIC via SR-IOV, meaning customers can use the same OFED driver stack that they’re accustomed to in a bare-metal context. HB-series VMs deliver MPI latencies as low as 2.1 microseconds, with consistency, bandwidth, and message rates in line with bare-metal InfiniBand deployments.

Cloud HPC scaling achievement

As part of early acceptance testing, the Azure HPC team benchmarked many widely used HPC applications. One common class of applications is those that simulate computational fluid dynamics (CFD). To see how far HB-series VMs could scale, we selected the Le Mans 100 million cell model available to Star-CCM+ customers, with results as follows:

Figure: Siemens Star-CCM+ v14.02, Le Mans 100M coupled model scaling (speed-up vs. nodes and parallel efficiency vs. nodes).

Table: number of hosts, cores, PPN, sample elapsed time, speed-up per node, and parallel efficiency.

The Le Mans 100 million cell model scaled to 256 VMs across multiple configurations, accounting for as many as 11,520 CPU cores. Our testing revealed that maximum scaling efficiency could be had with two MPI ranks per NUMA domain, yielding a top-end scaling efficiency of 71.3 percent. For top-end performance, three MPI ranks per NUMA domain yielded the fastest overall runtime. Customers can choose which metric they find most valuable based on a wide variety of factors.
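
To make the relationship between those two metrics concrete, the short sketch below computes speed-up and parallel efficiency from elapsed wall-clock times; the numbers are illustrative placeholders, not the Star-CCM+ measurements behind the charts above.

# Speed-up and parallel efficiency from elapsed wall-clock times (illustrative numbers only).
def scaling_metrics(base_nodes, base_seconds, nodes, seconds):
    speedup = base_seconds / seconds      # how many times faster than the baseline run
    ideal = nodes / base_nodes            # perfect linear scaling relative to the baseline
    efficiency = speedup / ideal          # fraction of ideal scaling actually achieved
    return speedup, efficiency

# Hypothetical example: a single-node run of 1,000 s scaled out to 256 nodes finishing in 5.48 s.
speedup, efficiency = scaling_metrics(1, 1000.0, 256, 5.48)
print(f"speed-up: {speedup:.1f}x, parallel efficiency: {efficiency:.1%}")  # ~182.5x, ~71.3%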

Delighting HPC customers on Azure

The unique capabilities and cost-performance of HB-series VMs are a big win for scientists and engineers who depend on high-performance computing to drive their research and productivity to new heights. Organizations spanning aerospace, automotive, defense, financial services, heavy equipment, manufacturing, oil & gas, public sector academic, and government research have shared feedback on how the HB-series has increased product performance and provided new insights through detailed simulation models.

Rescale partners with Azure to provide HPC resources for computationally complex simulations and analytics. Launching today, Azure HB-series VMs can be consumed through Rescale’s ScaleX® platform as the new “Amber” compute resource.

“As the only fully managed HPC cloud service in the market, Rescale creates an elegant way to move on-premises HPC workloads to the cloud. We have been waiting with great anticipation for Microsoft to introduce cloud building blocks specifically engineered for HPC,” said Adam McKenzie, CTO of Rescale. “Now, new HB-series VMs on Azure enable MPI workloads to scale to tens of thousands of cores with the kind of cost-performance that rivals on-premises supercomputers.”

Available now

Azure Virtual Machine HB-series are currently available in South Central US and Western Europe, with additional regions rolling out soon.

Azure.Source – Volume 84


Now available

All U.S. Azure regions now approved for FedRAMP High impact level

We’re now sharing our ability to provide Azure public services that meet U.S. Federal Risk and Authorization Management Program (FedRAMP) High impact level and extend FedRAMP High Provisional Authorization to Operate (P-ATO) to all of our Azure public regions in the United States. Achieving FedRAMP High means that both Azure public and Azure Government data centers and services meet the demanding requirements of FedRAMP High, making it easier for more federal agencies to benefit from the cost savings and rigorous security of the Microsoft Commercial Cloud.

Now in preview

Drive higher utilization of Azure HDInsight clusters with autoscale

We are excited to share the preview of the Autoscale feature for Azure HDInsight. This feature enables enterprises to become more productive and cost-efficient by automatically scaling clusters up or down based on the load or a customized schedule.

Announcing the preview of Windows Server containers support in Azure Kubernetes Service

Kubernetes is taking the app development world by storm. Earlier this month, we shared that the Azure Kubernetes Service (AKS) was the fastest growing compute service in Azure’s history. Today, we’re excited to announce the preview of Windows Server containers in Azure Kubernetes Service (AKS) for the latest versions, 1.13.5 and 1.14.0.  With this, Windows Server containers can now be deployed and orchestrated in AKS enabling new paths to migrate and modernize Windows Server applications in Azure.

Optimize price-performance with compute auto-scaling in Azure SQL Database serverless

Optimizing compute resource allocation to achieve performance goals while controlling costs can be a challenging balance to strike – especially for database workloads with complex usage patterns. To help address these challenges, we are pleased to announce the preview of Azure SQL Database serverless. SQL Database serverless (preview) is a new compute tier that optimizes price-performance and simplifies performance management for databases with intermittent and unpredictable usage. Line-of-business applications, dev/test databases, content management, and e-commerce systems are just some examples across a range of applications that often fit the usage pattern ideal for SQL Database serverless.


Visual interface for Azure Machine Learning service

During Microsoft Build we announced the preview of the visual interface for Azure Machine Learning service. This new drag-and-drop workflow capability in Azure Machine Learning service simplifies the process of building, testing, and deploying machine learning models for customers who prefer a visual experience to a coding experience. This capability brings the familiarity of what we already provide in our popular Azure Machine Learning Studio with significant improvements to ease the user experience.

Technical content

Kubernetes - from the beginning, Part I, Basics, Deployment and Minikube

Kubernetes is a BIG topic. In this blog series, Chris Noring tackles the basics. Part 1 covers: Why Kubernetes and orchestration in general; talking through Minikube, simple deploy example; cluster and basic commands; deploying an app; and concepts and troubleshooting of pods and nodes.

Introduction to AzureKusto

This post is to announce the availability of AzureKusto, the R interface to Azure Data Explorer (internally codenamed “Kusto”), a fast, fully managed data analytics service from Microsoft. It is available from CRAN, or you can install the development version from GitHub.

Microsoft Azure for spoiled people

In this article, the author walks you through the easiest possible way to set up a Vue.js CLI-built web app on Azure with continuous integration via GitHub.

Azure IoT Central and MXChip Hands-on Lab

This hands-on lab covers creating an Azure IoT Central application, connecting an MXChip IoT DevKit device to your Azure IoT Central application, and setting up a device template.

Azure shows

A new way to try .NET

Learning a programming language is becoming a fundamental aspect of education across the world. We're always looking for new and interesting ways to teach programming to learners at all levels. From Microsoft Build 2019,  we had Maria Naggaga come on to show us the Try .NET project. She shows us how this simple tool will allow us to create interactive documentation, workshops, and other interesting learning experiences.

Xamarin.Forms 101: Dynamic resources

Let's take a step back in a new mini-series that I like to call Xamarin.Forms 101. In each episode we will walk through a basic building block of Xamarin.Forms to help you build awesome cross-platform iOS, Android, and Windows applications in .NET. This week we will look at how to use dynamic resources to change the value of a resource while the application is running.

Cosmos DB data in a smart contract with Logic Apps

We show an IoT use case that highlights how to leverage the power of Cosmos DB to manipulate IoT data and use that data in smart contracts via the Ethereum Logic App connector.

ARM templates and Azure policy

Cynthia talks with Satya Vel on the latest ARM template updates including an enhanced template export experience, best practices for ARM clients, and new capabilities that are now available on ARM templates.

Industries and partners


Securing the pharmaceutical supply chain with Azure IoT

You’re responsible for overseeing the transportation of a pallet of medicine halfway around the world. Drugs will travel from your pharmaceutical company’s manufacturing outbound warehouse in central New Jersey to third-party logistics firms, distributors, pharmacies, and ultimately, patients. Each box in that pallet – no bigger than the box that holds the business cards on your desk – contains very costly medicine, the product of 10 years of research and development spending. But there are several big catches. Read on to see what they are and how Azure IoT helps overcome them.

How you can use IoT to power Industry 4.0 innovation

IoT is ushering in an exciting—and sometimes exasperating—time of innovation. Adoption isn’t easy, so it’s important to hold a vision of the promise of Industry 4.0 in mind as you get ready for this next wave of business. This post is the fourth in a four-part series designed to help companies maximize their ROI on IoT.

Manage your cross cloud spend using Azure Cost Management

Azure NetApp Files is now generally available

CMake 3.14 and Performance Improvements


In Visual Studio 2019 version 16.1 Preview 2 we have updated the version of CMake we ship in-box to CMake 3.14. This comes with performance improvements for extracting generated build system information. Additionally, we now support virtually all Visual Studio capabilities regardless of where the CMake binary comes from, as long as the CMake version is at least 3.14. The main reason for this is the introduction of the file-based API, which we now support and which provides a new way to retrieve semantic information. It is now the recommended way to connect an IDE to CMake (the older cmake-server mode is deprecated), and we are an early adopter of the feature.

Visual Studio Performance Improvements

The indexing is now significantly faster for code opened via Open folder, and as a result IntelliSense is available considerably faster than in Visual Studio 2017. As an example, in the LLVM codebase, IntelliSense becomes available at least 2 times faster in Visual Studio 2019. Additionally, a new indexing algorithm lights up IntelliSense incrementally while the folder is being indexed.

In Visual Studio 2017 on average it takes 3 min from the point of opening the LLVM folder, to the point where you have IntelliSense, including generation. In Visual Studio 2019 it takes 1:26 min, including generation.

CMake 3.14

We now ship CMake 3.14 in-box with Visual Studio. This contains the new file-based API, and support for the Visual Studio 2019 generators. To see the full set of changes, please see the CMake 3.14 release notes.

Visual Studio 2019 Generators

CMake generator selection box showing Visual Studio 16 2019

CMake 3.14 introduces support for the Visual Studio 2019 generators. The new generator is called “Visual Studio 16 2019”, and the platform targeting is simplified. To use a specific platform, use the -A argument. For example, to use the Visual Studio 2019 generator targeting the x64 platform:

cmake.exe -G "Visual Studio 16 2019" -A x64

File-based API

The file-based API allows a client to write query files prior to build system generation. During build system generation CMake will read those query files and write object model response files. Prior to this API’s introduction we were using the cmake-server to get the equivalent information. We’re still supporting the old model, but starting with 3.14 we can now support the new model as well. One of the differences in our CMake fork on GitHub was the backtrace information needed for our Targets View feature inside Visual Studio. Prior to CMake 3.14 we needed the CMake version from our fork in order for Targets View to work properly. Now, with the file-based API, this is no longer required.

The file-based API provides a simpler, standard path to the future, with official support in CMake itself. We expect most users to see either performance improvements or no degradation of performance. Extracting the information to populate the Visual Studio UI is faster because we are just reading the response files rather than running CMake in a long-running server mode, so there is less memory usage and less overhead associated with creating and maintaining processes.
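
For the curious, here is a rough sketch of what a file-based API client looks like from the outside; it is not Visual Studio's actual implementation, and the source and build paths are placeholders. The query and reply locations follow the layout documented for CMake 3.14 (<build>/.cmake/api/v1/query and <build>/.cmake/api/v1/reply).

# Rough sketch of a CMake file-based API client (requires CMake 3.14 or later).
import glob
import json
import pathlib
import subprocess

source_dir = pathlib.Path("C:/src/MyProject")      # placeholder: folder with a CMakeLists.txt
build_dir = pathlib.Path("C:/src/MyProject/out")   # placeholder: build folder

# 1. Request the codemodel object by dropping an empty query file before generation.
query_dir = build_dir / ".cmake" / "api" / "v1" / "query"
query_dir.mkdir(parents=True, exist_ok=True)
(query_dir / "codemodel-v2").touch()

# 2. Generate the build system; CMake writes the reply files during this step.
subprocess.run(["cmake", "-G", "Visual Studio 16 2019", "-A", "x64",
                "-S", str(source_dir), "-B", str(build_dir)], check=True)

# 3. Read the reply index, which points at the codemodel and other response files.
index_path = sorted(glob.glob(str(build_dir / ".cmake" / "api" / "v1" / "reply" / "index-*.json")))[-1]
index = json.loads(pathlib.Path(index_path).read_text())
print([obj["kind"] for obj in index["objects"]])   # e.g. ['codemodel']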

These value-added features light up automatically when you update to Visual Studio 2019 version 16.1 Preview 2.

Send Us Feedback

Your feedback is a critical part of ensuring that we can deliver the best CMake experience.  We would love to know how Visual Studio 2019 version 16.1 Preview 2 is working for you. If you have any questions specific to CMake Tools, please reach out to cmake@microsoft.com or leave a comment. If you find any issues or have a suggestion, the best way to reach out to us is to Report a Problem.

The post CMake 3.14 and Performance Improvements appeared first on C++ Team Blog.

Porting desktop apps to .NET Core


Since I’ve been working with the community on porting desktop applications from .NET Framework to .NET Core, I’ve noticed that there are two camps of folks: some want a very simple and short list of instructions to get their apps ported to .NET Core while others prefer a more principled approach with more background information. Instead of writing up a “Swiss Army knife”-document, we are going to publish two blog posts, one for each camp:

  • This post is the simple case. It’s focused on simple instructions and smaller applications and is the easiest way to move your app to .NET Core.
  • We will publish another post for more complicated cases. That post will focus on non-trivial applications, such as WPF applications with dependencies on WCF and third-party UI packages.

If you prefer watching videos instead of reading, here is the video where I do everything that is described below.



Step 0 – Prerequisites

To port your desktop apps to Core, you’ll need .NET Core 3 and Visual Studio 2019.

Step 1 – Run portability analyzer

Before porting, you should check how compatible your application is with .NET Core. To do so, download and run .NET Portability Analyzer.

  • On the first tab, Portability Summary, if you see only 100% in the .NET Core column (everything is highlighted in green), your code is fully compatible; go to Step 2.
  • If you have values of less than 100%, first look at all assemblies that aren’t part of your application. For those, check if their authors provide versions for .NET Core or .NET Standard.
  • Now look at the other part of assemblies that are coming from your code. If you don’t have any of your assemblies listed in the portability report, go to Step 2. If you do, open Details tab, filter the table by clicking on the column Assembly and only focus on the ones that are from your application. Walk the list and refactor your code to stop using the API or replace the API usage with alternatives from .NET Core.

Step 2 – Migrate to SDK-style .csproj

In Solution Explorer right-click on your project (not on the solution!). Do you see Edit Project File? If you do, you already use the SDK-style project file, so you should move to Step 3. If not, do the following.

  • Check in the Solution Explorer if your project contains a packages.config file. If it doesn’t, no action is needed; if it does, right-click on packages.config and choose Migrate packages.config to PackageReference. Then click OK.
  • Open your project file by right-clicking on the project and choose Unload Project. Then right-click on the project and choose Edit <your project name>.csproj.
  • Copy the content of the project file somewhere, for example into Notepad, so you can search in it later.
  • Delete everything from your project file opened in Visual Studio (I know it sounds aggressive 😊, but we will add only the needed content from the copy we’ve just made in a few steps). In place of the text you’ve just deleted, paste the SDK-style project skeleton for a WinForms or a WPF application, as sketched after this list.
  • In Notepad, search for PackageReference. If you did not find anything, move on. If you found PackageReference, copy the entire <ItemGroup> that contains PackageReference into your project file, opened in Visual Studio, right below the lines you’ve pasted in the step above. Do it for each occurrence of PackageReference you have found. The copied block should look like the PackageReference example after this list.
  • Now do the same as above for ProjectReference. If you did not find anything, move on.
  • If you found any ProjectReference items, you can remove the lines with the <Project> and <Name> properties, since they are not needed in the new project file style. So for each ProjectReference that you have found (if any), copy only the ItemGroup and ProjectReference elements, as in the ProjectReference example after this list.
  • Save everything. Close the .csproj file in Visual Studio. Right-click on your project in the Solution Explorer and select Reload Project. Rebuild and make sure there are no errors.
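
Since the original snippets do not reproduce well here, below is a minimal sketch of what the pasted skeleton and the copied blocks typically look like; the package name, version, and project path are placeholders rather than content from the original post.

<!-- SDK-style skeleton for a WinForms app still targeting .NET Framework -->
<Project Sdk="Microsoft.NET.Sdk.WindowsDesktop">
  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <TargetFramework>net472</TargetFramework>
    <UseWindowsForms>true</UseWindowsForms>
  </PropertyGroup>
</Project>

<!-- For a WPF application, use <UseWPF>true</UseWPF> instead of <UseWindowsForms> -->

<!-- Example of a copied PackageReference block (package name and version are placeholders) -->
<ItemGroup>
  <PackageReference Include="Newtonsoft.Json" Version="12.0.1" />
</ItemGroup>

<!-- Example of a trimmed ProjectReference block (path is a placeholder) -->
<ItemGroup>
  <ProjectReference Include="..\MyLibrary\MyLibrary.csproj" />
</ItemGroup>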

Great news, you just updated your project file to the new SDK-style! The project is still targeting .NET Framework, but now you’ll be able to retarget it to .NET Core.

Step 3 – Retarget to .NET Core

Open your project file by double-clicking on your project in Solution Explorer. Find the property <TargetFramework> and change the value to netcoreapp3.0. Now your project file should look like this:
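
Here is a minimal sketch (WinForms shown; for WPF use <UseWPF>true</UseWPF> instead):

<Project Sdk="Microsoft.NET.Sdk.WindowsDesktop">
  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <UseWindowsForms>true</UseWindowsForms>
  </PropertyGroup>
</Project>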

Build and run your project. Congratulations, you ported to .NET Core 3!

Fixing errors

If you get errors like

The type or namespace <some name> could not be found

or

The name <some name> does not exist in the current context

and your portability report was green, it should be easy to fix by adding a NuGet package with the corresponding library. If you cannot find the NuGet package with the library that is missing, try referencing Microsoft.Windows.Compatibility. This package adds ~21K .NET APIs from .NET Framework.

Working with designers

Even though it is possible to edit the user interface of your application via code, developers usually prefer using the visual designers. With .NET Core we had to rearchitect the way the designers work with .NET Core projects:

  • The WPF designer is already in preview and we are working on adding more functionality to it.
  • The WinForms designer for .NET Core will be available later; in the meantime, you can use the .NET Framework WinForms designer as a workaround.

Here is how you can use the .NET Framework WinForms designer:

  1. Copy your .csproj file (let’s say you have MyProject.csproj), give it a different name, for example MyProject.NetFramework.csproj and put it next to your existing project file.
  2. Make sure your project is closed in Visual Studio, open the new project MyProject.NetFramework.csproj.
    In Solution Explorer right-click on your project and select Properties. In the Application tab (should be open by default) set Assembly name and Default namespace to the same values as in your initial project (remove “.NetFramework” from the names).
    Save this solution next to your existing solution.
  3. Open the new project file and change the <TargetFramework> to net472.
  4. Now when you need to use the WinForms designer, load your project with the MyProject.NetFramework.csproj project file and you’ll get the full experience of .NET Framework designer. When you are done with the designer, close and open your project with the .NET Core project file.
  5. This is just a workaround until the WinForms designer for .NET Core is ready.

Why port to .NET Core

Check out the video Porting to .NET Core 3.0, where Scott Hunter and I talk about all the new things coming with .NET Core 3.

The post Porting desktop apps to .NET Core appeared first on .NET Blog.


Piper Command Center BETA – Build a game controller from scratch with Arduino


Back in 2018 I posted my annual Christmas List of STEM Toys and the Piper Computer Kit 2 was on the list. My kids love this little wooden "laptop" built around a Raspberry Pi and an LCD screen. You spend time going through curated episodes of custom content and build and wire the computer LIVE while it's on!

The Piper folks saw my post and asked me to take a look at the BETA of their Piper Command Center, so my sons and I jumped at the chance. They are actively looking for feedback. It's a chance to build our own game controller!

The Piper Command Center BETA already has a ton of online content and things to try. Their "firmware" is an Arduino sketch and it's all up on GitHub. You'll want to get the Arduino IDE from the Windows Store.

Today the Command Center can look like a Keyboard or a Mouse.

  1. In Mouse Mode (default), the joystick controls cursor movement and the left and right buttons mimic left and right mouse clicks.
  2. In Keyboard Mode, the joystick mimics the arrow keys on a keyboard, and the buttons mimic Space Bar (Up), Z (Left), X (Down), and C (Right) keys on a keyboard.

Once it's built you can use the controller to play games in your browser, or soon, with new content on the Piper itself, which usually runs Minecraft. However, you DO NOT need the Piper to get the Piper Command Center. They are separate but complementary devices.

Assemble a real working game controller, understand the basics of an Arduino, and discover physical computing by configuring a joystick, buttons, and more. Ideal for ages 13+.

My son is looking at how he can modify the "firmware" on the Command Center to allow him to play emulators in the browser.

The parts and wires of the Piper Command Center

The Piper Command Center comes unassembled, of course, and you get to put it together with a cool blueprint instruction sheet. We had some fun with the wiring and were off by one a few times, but they've got a troubleshooting video that helped us through it.

Blueprints for the Piper Command Center

It's a nice little bit of kit and I love that it's made of wood. I'd like to see one with a second joystick that could literally emulate an XInput control pad, although that might be more complex than just emulating a mouse or keyboard.

Go check it out. We're happy with it and we're looking forward to whatever direction it goes. The original Piper has updated itself many times in the few years we've had it, and we upgraded it to a 16 GB SD card to support the latest content and OS update.

Piper Command Center is in BETA and will be updated and actively developed as they explore this space and what they can do with the device. As of the time of this writing there were five sketches for this controller.






Python in Visual Studio Code – May 2019 Release


We are pleased to announce that the May 2019 release of the Python Extension for Visual Studio Code is now available. You can download the Python extension from the Marketplace, or install it directly from the extension gallery in Visual Studio Code. You can learn more about Python support in Visual Studio Code in the documentation.

In this release we made improvements that are listed in our changelog, closing a total of 42 issues. Highlights include IntelliSense in the Python Interactive window and additional improvements to the Python Language Server.

IntelliSense in the Python Interactive Window

The Python Interactive window, our built-in IPython Console, is now enhanced with full-fledged IntelliSense – code completion, member lists, quick info for methods, and parameter hints! Now, you can be just as productive typing in the Python Interactive window as you would in the code editor.

Additional improvements to the Python Language Server

This release includes bug fixes and enhancements to the “Find All References” and “Go to Definition” features, such as handling of relative imports. We also made continued improvements to the Python Language Server’s loading time, CPU and memory usage. We’re working hard on decreasing memory consumption, so if you run into problems, please provide more details on the Python Language Server GitHub page or directly on the “high memory usage” issue page.

As a reminder, to opt into the Language Server, change the python.jediEnabled setting to false in File > Preferences > User Settings. We are working towards making the language server the default in the next few releases.
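
In settings.json that boils down to a single entry (VS Code's settings file accepts comments):

{
    // Disable Jedi to opt into the Microsoft Python Language Server
    "python.jediEnabled": false
}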

In case you missed it: Remote Development

On May 2nd, Microsoft’s Python and Visual Studio Code teams announced the Remote Development extensions for Visual Studio Code, which allow developers to run, set up and develop their projects inside Docker containers, remote SSH hosts and Windows Subsystem for Linux (WSL). It all runs remotely: auto-completions, debugging, terminal, source control, additional extensions you install and more, but you get the same experience as if it were running locally.

Check out the Remote Python Development in Visual Studio Code blog post to learn more and get started!

Other Changes and Enhancements

We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python in Visual Studio Code. Some notable changes include:

  • Allow column sorting in variable explorer (#5281)
  • Use the correct activation script for conda environments (#4402)
  • Always show pytest’s output when it fails. (#5313)
  • Fix performance issues with long collections and variable explorer (#5511)
  • Update ptvsd to 4.2.10.

Be sure to download the Python extension for Visual Studio Code now to try out the above improvements. If you run into any problems, please file an issue on the Python VS Code GitHub page.

The post Python in Visual Studio Code – May 2019 Release appeared first on Python.

Azure IoT Edge Tools Extension (Preview) Announcement


We’re excited to announce the preview availability of the new Azure IoT Edge Tools Extension (Preview) for Visual Studio 2019. The extension provides a rich set of functionalities to support development of IoT Edge solutions with Visual Studio 2019:

  • New Azure IoT Edge project targeting different platforms (Linux amd64, Linux arm32v7, Windows amd64)
  • Add a new IoT Edge module (C#/C) to solution
  • Edit, build and debug IoT Edge modules locally on your Visual Studio machine
  • Build and push docker images of IoT Edge modules
  • Run IoT Edge modules in a local or remote simulator
  • Deploy IoT solutions to IoT Edge devices (with Cloud Explorer)

Prerequisites

  • Visual Studio 2019 with the “.NET desktop development” and “Azure development” workloads installed; “Windows desktop development with C++” is needed if you plan to develop C modules
  • Docker Desktop. You need to set Docker CE to run in Linux container mode or Windows container mode, as appropriate.
  • To set up a local development environment to debug, run, and test your IoT Edge solution, you need the Azure IoT EdgeHub Dev Tool. Install Python (2.7/3.6), then install iotedgehubdev by running the command below in your terminal. Make sure your Azure IoT EdgeHub Dev Tool version is greater than 0.8.0.
    pip install --upgrade iotedgehubdev

Installation

There are two options to install the new extension:

  • Download and install the new extension from the Visual Studio Marketplace.
  • Alternatively, you can install the extension directly from within Visual Studio 2019 using the menu Extensions -> Manage Extensions. In the Manage Extensions window, select Online from the left panel and input edge in the search box on the top-right to search and download “Azure IoT Edge Tools for VS 2019 [Preview]”.

How to use this extension?

Please refer to the following tutorials to get started:
Use Visual Studio 2019 to develop and debug modules for Azure IoT Edge (Preview) 
Easily Develop and Debug Azure IoT Edge C Modules with Azure IoT Edge Tools
Visual Studio Azure IoT Edge Tools document repo

Please don’t hesitate to give it a try! Your feedback and suggestions are very important for us to keep improving and making it even easier to develop your IoT applications. Please share your thoughts with us by suggesting a feature or reporting an issue in our Visual Studio Azure IoT Edge Tools repo.

The post Azure IoT Edge Tools Extension (Preview) Announcement appeared first on The Visual Studio Blog.

Key causes of performance differences between SQL managed instance and SQL Server


Migrating to a Microsoft Azure SQL Database managed instance provides a host of operational and financial benefits you can only get from a fully managed and intelligent cloud database service. Some of these benefits come from features that optimize or improve overall database performance. After migration many of our customers are eager to compare workload performance with what they experienced with on-premises SQL Server, and sometimes they're surprised by the results. In many cases, you might get better results on the on-premises SQL Server database because a SQL Database managed instance introduces some overhead for manageability and high availability. In other cases, you might get better results on a SQL Database managed instance because the latest version of the database engine has improved query processing and optimization features compared to older versions of SQL Server.

This article will help you understand the underlying factors that can cause performance differences and the steps you can take to make fair comparisons between SQL Server and SQL Database.

If you're surprised by the comparison results, it's important to understand what factors could influence your workload and how to configure your test environments to ensure you have a fair comparison. Some of the top reasons why you might experience lower performance on a SQL Database managed instance compared to SQL Server are listed below. You can mitigate some of these by increasing and pre-allocating file sizes or adding cores; however, the others are prerequisites for guaranteed high availability and are part of the PaaS service.

Simple or bulk recovery model

The databases placed on the SQL Database managed instance are using a full database recovery model to provide high availability and guarantee no data loss. In this scenario, one of the most common reasons why you might get worse performance on a SQL Database managed instance is the fact that your source database uses a simple or bulk recovery model. The drawback of the full recovery model is that it generates more log data than the simple/bulk logged recovery model, meaning your DML transaction processing in the full recovery model will be slower.

You can use the following query to determine what recovery model is used on your databases:

select name, recovery_model_desc from sys.databases

If you want to compare the workload running on SQL Server and SQL Database managed instances, for a fair comparison make sure the databases on both sides are using the full recovery model.
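
If the source database is using the simple or bulk-logged model, you can switch it for the duration of the test with a standard ALTER DATABASE statement (the database name below is a placeholder):

ALTER DATABASE [YourDatabase] SET RECOVERY FULL;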

Resource governance and HA configuration

SQL Database managed instance has built-in resource governance that ensures 99.99% availability, and guarantees that management operations such as automated backups will be completed even under high workloads. If you don’t use similar constraints on your SQL Server, the built-in resource governance on SQL Database managed instance might limit your workload.

For example, there’s an instance log throughput limit (up to 22 MB/s on the general purpose tier and up to 48 MB/s on the business critical tier) that ensures you can’t load more data than the instance can back up. In this case, you might see higher INSTANCE_LOG_GOVERNOR wait statistics that don’t exist in your SQL Server instance. These resource governance constraints might slow down operations such as bulk load or index rebuild because these operations require higher log rates.

In addition, the secondary replicas in business critical tier instances might slow down the primary database if they can't catch up on the changes and apply them, so you might see additional HADR_DATABASE_FLOW_CONTROL or HADR_THROTTLE_LOG_RATE_SEND_RECV wait statistics.

If you're comparing your SQL Server workload running on local SSD storage to the business critical tier, note that the business critical instance is an Always On availability group cluster with three secondary replicas. Make sure that your source SQL Server has a similar HA implementation using Always On availability groups with at least one synchronous-commit replica. If you're comparing the business critical tier with a single SQL Server instance writing to the local disk, this would be an unrealistic comparison due to the absence of HA on your source instance. If you are using asynchronous Always On replicas, you would have HA with better performance, but in this case you are trading the possibility of data loss for performance, and you will get better results on the SQL Server instance.

Automated backup schedule

One of the main reasons why you would choose the SQL Database managed instance is the fact that it guarantees you will always have backups of your databases, even under heavy workloads. The databases in a SQL Database managed instance have scheduled full, incremental, and log backups. Full backups are taken every seven days, incremental every twelve hours, and log backups are taken every five to ten minutes. If you have multiple databases on the instance there's a high chance there is at least one backup currently running.

Since the backup operations are using some instance resources (CPU, disk, network), they can affect workload performance. Make sure the databases on the system that you compare with the managed instance have similar backup schedules. Otherwise, you might need to accept that you're getting better results on your SQL Server instance because you're making a trade-off between database recovery and performance, which is not possible on a SQL Database managed instance.

If you're seeing unexpected performance differences, check if there is some ongoing full/differential backup either on the SQL Database managed instance or SQL Server instance that can affect performance of the currently running workload, using the following query:

SELECT r.command, query = a.text, start_time, percent_complete,
      eta = dateadd(second,estimated_completion_time/1000, getdate())
FROM sys.dm_exec_requests r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) a
 WHERE r.command IN ('BACKUP DATABASE','BACKUP LOG')

If you see currently running full or incremental backup during the short-running benchmark, you might pause your workload and resume it once the backup finishes.

Connection and App to Database proximity

The application accessing the databases and executing the benchmark queries on the SQL Database managed instance and SQL Server instance must be in a similar network proximity range in both cases. If you are placing your application and SQL Server database in the local environment (or running an app like HammerDB from the same machine where the SQL Server is installed) you will get better results on SQL Server compared to the SQL Database managed instance, which is placed on a distributed cloud environment with respect to the application. Make sure that in both cases you're running the benchmark application or query on separate virtual machines in the same region as SQL Database managed instance to get the valid results. If you're comparing an on-premises environment with the equivalent cloud environments, try to measure bandwidth and latency between the app and database and try to ensure they are similar.

SQL Database managed instance is accessed via proxy gateway nodes that accept client requests and redirect them to the actual database engine nodes. To get results closer to your environment, enable ProxyOverride mode on your instance using the Set-AzSqlInstance PowerShell command to enable direct access from the client to the nodes currently hosting your SQL Database managed instance.

In addition, due to compliance requirements, a SQL Database managed instance enforces SSL/TLS transport encryption which is always enabled. Encryption can introduce overhead in case of a large number of queries. If your on-premises environment does not enforce SSL encryption you will see additional network overhead in the SQL Database managed instance.

Transparent data encryption

The databases on SQL Database managed instance are encrypted by default using Transparent Data Encryption. Transparent Data Encryption encrypts and decrypts every page that is exchanged with disk storage. This consumes more CPU resources and introduces additional latency when fetching and saving data pages to or from disk storage. Make sure that the databases on both the SQL Database managed instance and SQL Server have Transparent Data Encryption either turned on or off, and that database encryption/decryption operations have completed before starting performance testing.

You can use the following query to determine whether the databases are encrypted:

select name, is_encrypted from sys.databases

Another important factor that might affect your performance is an encrypted TempDB. TempDB is encrypted if at least one database on your SQL Server or SQL Database managed instance is encrypted. As a result, you might be comparing two databases that are not encrypted, but because some other database on the managed instance is encrypted (even though it's not involved in the workload), TempDB will also be encrypted. The unencrypted databases will still use the encrypted TempDB, and any query that creates temporary objects or spills to TempDB will be slower. Note that TempDB only gets decrypted once all user databases on an instance are decrypted and the instance restarts. Scaling a SQL Database managed instance to a new pricing tier and back is one way to restart it.
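
To see the current encryption state of TempDB (and every other database with an encryption key), you can also query the encryption DMV:

SELECT db_name(database_id) AS database_name, encryption_state
FROM sys.dm_database_encryption_keys;
-- encryption_state: 1 = unencrypted, 3 = encrypted, other values indicate a transition in progress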

Database engine settings

Make sure the database engine setting such as database compatibility levels, trace flags, system configurations (‘cost threshold for parallelism’, ’max degree of parallelism’), database scoped configurations (LEGACY_CARDINALITY_ESTIMATOR, PARAMETER_SNIFFING, QUERY_OPTIMIZER_HOTFIXES, etc.), and database settings (AUTO_UPDATE_STATISTICS, DELAYED_DURABILITY) on the SQL Server and SQL Database managed instances are the same on both databases.

The following sample queries can help you identify these settings on SQL Server and Azure SQL Database managed instance:

select compatibility_level, snapshot_isolation_state_desc, is_read_committed_snapshot_on,
  is_auto_update_stats_on, is_auto_update_stats_async_on, delayed_durability_desc
from sys.databases;
GO

select * from sys.database_scoped_configurations;
GO

dbcc tracestatus;
GO

select * from sys.configurations;

Compare the results of these queries on the SQL Database managed instance and SQL Server and try to align the differences if you identify some.

Note: The list of trace flags and configurations might be very long, so we recommend filtering them or looking only at the trace flags you've changed or know are affecting performance. Some of the trace flags are pre-configured on SQL Database managed instance as part of the PaaS configuration and do not affect performance.

You might experiment with changing the compatibility level to a higher value, turning on the legacy cardinality estimator, or the automatic tuning feature on the SQL Database managed instance, which might give you better results than your SQL Server database.

Also note that SQL Database managed instance might provide better performance even if you align all parameters because it has the latest improvements, or fixes that are not bound to compatibility level, or some features, like forcing last good plan, that might improve your workload.

Hardware and environment specification

SQL Database managed instance runs on standardized hardware with pre-defined technical characteristics that are probably different than your environment. Some of the characteristics you might need to consider when comparing your environment with the environment where the SQL Database managed instance is running are:

  1. The number of cores should be the same on both SQL Server and the SQL Database managed instance. Note that a SQL Database managed instance uses 2.3-2.4 GHz processors, which might be different than your processor speed, so it might consume more or less CPU for the same operation. If possible, check whether hyperthreading is used in your SQL Server environment when comparing to the Gen4 and Gen5 hardware generations on a SQL Database managed instance: Gen4 hardware does not use hyperthreading, while Gen5 does. If you are comparing SQL Server running on a bare-metal machine with a SQL Database managed instance or SQL Server running on a virtual machine, you'll probably get better results on the bare-metal instance.
  2. Amount of memory, including the memory/core ratio (5.1 GB/core on Gen5, 7 GB/core on Gen4). A higher memory/core ratio provides a bigger buffer pool cache and increases the cache hit ratio. If your workload does not perform well on a managed instance with a memory/core ratio of 5, then you probably need to choose a virtual machine with the appropriate memory/core ratio instead of a SQL Database managed instance.
  3. IO characteristics – You need to be aware that performance of the storage system might be very different compared to your on-premises environment. A SQL Database managed instance is a cloud database and relies on Azure cloud infrastructure.
    • The general purpose tier uses remote Azure Premium disks, where IO performance depends on the file sizes. If you reach the log limit that depends on the file size, you might notice WRITE_LOG waits and fewer IOPS in file statistics. This issue might occur on a SQL Database managed instance if the log files are small and not pre-allocated. You might need to increase the size of some files in the general purpose tier to get better performance (see the Tech Community article Storage performance best practices and considerations for Azure SQL Managed Instance General Purpose tier).
    • A SQL Database managed instance does not use instant file initialization, so you might see additional PREEMPTIVE_OS_WRITEFILEGATHER wait statistics since the data files are filled with zero bytes during file growth.
  4. Local or remote storage types – Make sure you're considering local SSD versus remote storage while doing the comparison. The general purpose tier uses remote storage (Azure Premium Storage) that can't match your on-premises environment if it uses local SSD or a high-performance SAN. In this case you would need to use the business critical tier as a target. The general purpose tier can be compared with other cloud databases like SQL Server on Azure Virtual Machines that also use remote storage (Azure Premium Storage). In addition, beware that remote storage used by a general purpose instance is still different than remote storage used by a SQL Virtual Machine because:
    • The general purpose tier uses a dedicated IO resource per each database file that depends on the size of the individual files, while SQL Server on Azure Virtual Machine uses shared IO resources for all files where IO characteristics depend on the size of the disk. If you have many small files, you will get better performance on a SQL Virtual Machine, while you can get better performance on a SQL Database managed instance if the usage of files can be parallelized because there are no noisy neighbors who are sharing the same IO resources.
    • SQL Virtual Machines use a read-caching mechanism that improves read speed.

If your hardware specs and resource allocation are different, you might expect different performance results that can be resolved only by changing the service tier or increasing file size. If you are comparing a SQL Database managed instance with SQL Server on Azure Virtual Machines, make sure that you are choosing a virtual machine series that has memory/cpu ratio similar to SQL Database managed instance, such as DS series.

Azure SQL Database managed instance provides a powerful set of tools that can help you troubleshoot and improve performance of your databases, in addition to built-in intelligence that could automatically resolve potential issues. Learn more about monitoring and tuning capabilities of Azure SQL Database managed instance in the following article: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-monitoring-tuning-index

Simplifying event-driven architectures with the latest updates to Event Grid

Event-driven architectures are increasingly replacing and outpacing less dynamic polling-based systems, bringing the benefits of serverless computing to IoT scenarios, data processing tasks, and infrastructure automation jobs. As the natural evolution of microservices, event-driven designs are being adopted by companies all over the world to create new experiences in existing applications or bring those applications to the cloud, building more powerful and complex scenarios every day.

Today, we’re incredibly excited to announce a series of updates to Event Grid that will power higher performance and more advanced event-driven applications in the cloud:

  • Public preview of IoT Hub device telemetry events
  • Public preview of Service Bus as an event handler
  • Automatic server-side geo-disaster recovery
  • General availability of Event Domains, now with up to 100K topics per Domain
  • Public preview of 1MB event support
  • List search and pagination APIs
  • General availability of advanced filters with increased depth of filtering

Expanded integration with the Azure ecosystem

One of the biggest features we have been asked for since launching the Azure IoT Hub integration with Event Grid is device telemetry events. Today, we’re finally enabling that feature in public preview in all public regions except East US, West US, and West Europe. We are excited for you to try this capability and build more streamlined IoT solutions for your business.

Subscribing to device telemetry events allows you to integrate data from your devices into your solution more easily, including serverless applications using Azure Functions or Azure Logic Apps, and any other services by using webhooks, whether they are on Azure or not. This helps simplify IoT architectures by eliminating the need for additional services that poll for device telemetry before further processing.

By publishing device telemetry events to Event Grid, IoT Hub expands the services your data can reach beyond the endpoints supported through message routing. For example, you can automate downstream workflows by creating different subscriptions to device telemetry events for different device types, identified by a device twin tag, and triggering distinct Azure Functions or third-party applications for unique computation per device type. When you create Event Grid subscriptions to device telemetry events, IoT Hub creates a default route that handles all of those subscriptions.

Learn more about IoT Hub device telemetry in docs, and continue to submit your suggestions through the Azure IoT User Voice forum.

We are also adding Service Bus as an event handler for Event Grid in public preview, so starting today you can route your events in Event Grid directly to Service Bus queues. Service Bus can now act as either an event source or event handler, making for a more robust experience delivering events and messages in distributed enterprise applications. It is currently in public preview and does not work with Service Bus topics and sessions, but it does work with all tiers of Service Bus queues.

This enables command and control scenarios in which you receive events about activity on other services (such as blob created, device created, and job finished) and pass them along for further processing.

Learn more about Service Bus as a destination in docs.

Server-side geo disaster recovery

Event Grid now has built-in automatic geo disaster recovery (GeoDR) of metadata, applicable to all existing Domains, Topics and Event Subscriptions, not just new ones. This provides vastly improved resilience against service interruptions, all fully managed by our platform. In the event of an outage that takes out an entire Azure region, the Event Grid service will already have all of your eventing infrastructure metadata synced to a paired region, and your new events will begin to flow again with no intervention required on your side, avoiding service interruption automatically.

Disaster recovery is generally measured with two metrics: Recovery Point Objective (RPO), the amount of data you can afford to lose, and Recovery Time Objective (RTO), how long it takes for the service to be available again.

Event Grid’s automatic failover has different RPOs and RTOs for your metadata (event subscriptions, plus more) and data (events). If you need a different specification from the one below, you can still implement your own client-side failover using the topic health APIs.

  • Metadata RPO: Zero minutes. You read that right. Any time a resource is created in Event Grid, it’s instantly replicated across regions. In the event of a failover, no metadata is lost.
  • Metadata RTO: Though generally this happens much more quickly, within 60 minutes Event Grid will begin to accept create/update/delete calls for topics and subscriptions.
  • Data RPO: If your system is healthy and caught up on existing traffic at the time of regional failover, the RPO for events is about 5 minutes.
  • Data RTO: Like metadata, this generally happens much more quickly; however, within 60 minutes Event Grid will begin accepting new traffic after a regional failover.

Here’s the best part: there is no cost for metadata GeoDR on Event Grid. It is included in the current price of the service and won’t incur any additional charges.

Powering advanced event-driven workloads

As we see more advanced event-driven architectures for diverse scenarios such as IoT, CRM, or financial services, we’ve noticed an increasing need to expand our capabilities for multitenant applications and for workloads that handle larger amounts of data in their events.

Event Domains give you the power to organize your entire eventing infrastructure under a single construct, set fine-grained authorization rules on each topic for who can subscribe, and manage all event publishing with a single endpoint. Classic pub-sub architectures are built exclusively on topics and subscriptions, but as you build more advanced and high-fidelity event-driven architectures, the maintenance burden grows quickly. Event Domains take the headache out of it by handling much of the management for you.

Today we’re happy to announce that Event Domains are now generally available, and with that, you’ll be able to have 100,000 topics per Domain. Here’s the full set of Event Domains limits with general availability:

  • 100,000 topics per Event Domain
  • 100 Event Domains per Azure Subscription
  • 500 event subscriptions per topic in an Event Domain
  • 50 ‘firehose’ event subscriptions at the Event Domain scope
  • 5,000 events/second into an Event Domain

As always, if these limits don’t suit you, feel free to reach out via support ticket or by emailing askgrid@microsoft.com so we can get you higher capacity.

We also acknowledge that advanced event-driven architectures don’t always fit in the confines of 64 KB. These workloads require handling larger events for a simpler architecture, and today we’re announcing the public preview of events up to 1MB.

There are no configuration changes required, and this will work on existing event subscriptions; everything under 64 KB will still be covered by our general availability SLA. To try it out, just publish larger events. Note that events over 64 KB are charged in 64 KB increments, and the batch size limit for events sent to Event Grid as a JSON array is still 1 MB in total.

Simplified management of events

You might have thousands of event subscriptions or, with the general availability of Event Domains, hundreds of thousands of topics floating around your Azure subscription. To make searching and managing these resources easier, we’ve introduced list search and list pagination APIs throughout Event Grid. For more information, check out the details in the Azure Event Grid documentation.

Advanced filters used to route messages in Event Grid are now generally available, with no restriction on the number of nested objects in your JSON. This allows for more granularity when filtering events before passing them to other services for further processing, reducing the compute time and resources needed by avoiding that filtering elsewhere.

If you haven’t played with advanced filters yet, you can use the following operators on any part of the event, making the possibilities nearly endless: StringContains, StringBeginsWith, StringEndsWith, StringIn, StringNotIn, NumberGreaterThan, NumberGreaterThanOrEquals, NumberLessThan, NumberLessThanOrEquals, NumberIn, NumberNotIn, BoolEquals.
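
To make the idea concrete, here is a rough sketch of what advanced filters look like when attached to an event subscription, written as a TypeScript object literal. The property names (operatorType, key, value/values) are assumptions based on the Event Grid filtering model rather than a verbatim copy of the ARM or SDK schema, so check the Event Grid documentation before relying on them.

// Sketch only: the approximate shape of advanced filters on an event subscription.
const eventSubscriptionFilter = {
    advancedFilters: [
        // Only events whose subject starts with "/devices/thermostat-"
        { operatorType: "StringBeginsWith", key: "subject", values: ["/devices/thermostat-"] },
        // ...and whose payload temperature is at least 30
        { operatorType: "NumberGreaterThanOrEquals", key: "data.temperature", value: 30 },
    ],
};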

Get started today

As always, we love to hear your thoughts, feedback, and wish lists as you get a chance to try out these new features! You can start now with the following resources, and please reach out with your feedback.

  1. Sign up for an Azure free account if you don’t have one yet
  2. Subscribe to IoT Hub device telemetry events with Event Grid
  3. Learn more about using Service Bus as an event handler
  4. Build more powerful multitenant applications with Event Domains
  5. Perform searches and pagination over thousands and thousands of events with these new APIs
  6. Route only the necessary events for processing using advanced filters

Isolate app integrations for stability, scalability, and speed with an integration service environment

Innovation at scale is a common challenge facing large organizations. A key contributor to the challenge is the complexity in coordinating the sheer number of apps and environments.

Integration tools, such as Azure Logic Apps, give you the flexibility to scale and innovate as fast as you want, on-premises or in the cloud. This is a key capability you need to have in place when migrating to the cloud, or even if you're cloud native. Integration has often been treated as an afterthought. In the modern enterprise, however, application integration has to be done in conjunction with application development and innovation.

An integration service environment is the ideal solution for organizations concerned about noisy neighbor issues, data isolation, or who need more flexibility and configurability than the core Logic Apps service offers.

Building upon the existing set of capabilities, we are releasing a number of new, exciting changes that make integration service environments even better, such as:

    • Faster deployment times by halving the previous provisioning time
    • Higher throughput limits for an individual Logic App and connectors
    • An individual Logic App can now run for up to a year (365 days)

Integration service environment for Logic Apps is the next step for organizations who are pursuing integration as part of their core application development strategy. Here’s what an integration service environment can offer:

    • Direct, secure access to your virtual network resources. Enables Logic Apps to have secure, direct access to private resources such as virtual machines, servers, and other services in your virtual network, including Azure services with service endpoints and on-premises resources via Azure ExpressRoute or site-to-site VPN.
    • Consistent, highly reliable performance. Eliminates the noisy neighbor issue, removing the fear of intermittent slowdowns that can impact business-critical processes, thanks to a dedicated runtime in which only your Logic Apps execute.
    • Isolated, private storage. Sensitive data subject to regulation is kept private and secure, opening new integration opportunities.
    • Predictable pricing. Provides a fixed monthly cost for Logic Apps. Each integration service environment includes the free usage of one standard integration account and one enterprise connector. If your Logic Apps action execution count exceeds 50 million action executions per month, the integration service environment could provide better value.

New to integration service environments for Logic Apps? Watch this Azure Friday introduction video for assistance.

Get started with an integration service environment for Azure Logic Apps today.

Simplify the management of application configurations with Azure App Configuration

We’re excited to announce the public preview of Azure App Configuration, a new service aimed at simplifying the management of application configuration and feature flighting for developers and IT. App Configuration provides a centralized place in Microsoft Azure for users to store all their application settings and feature flags (a.k.a. feature toggles), control access to them, and deliver the configuration data where it is needed.

Eliminate hard-to-troubleshoot errors across distributed applications

Companies across industries are transforming into digital organizations in order to better serve their customers, foster tighter relationships, and respond to competition faster. We have witnessed rapid growth in the number of applications our customers run. Modern applications, particularly those running in a cloud, are typically made up of multiple components and distributed in nature. Spreading configuration data across these components often leads to hard-to-troubleshoot errors in production. When a company has a large portfolio of applications, these problems multiply very quickly.

With App Configuration, you can keep your application settings together so that:

  • You have a single consolidated view of all configuration data.
  • You can easily make changes to settings, compare values, and perform rollbacks.
  • You have numerous options to deliver these settings to your application, including injecting them directly into your compute service (e.g., App Service), embedding them in a CI/CD pipeline, or retrieving them on demand inside your code (see the retrieval sketch after this list).
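
For the on-demand retrieval option above, a minimal sketch in TypeScript might look like the following. It assumes the @azure/app-configuration JavaScript package and a connection string for your App Configuration store are available to you; it illustrates the flow rather than the only supported client.

import { AppConfigurationClient } from "@azure/app-configuration";

async function readSetting(): Promise<void> {
    // Connection string copied from the App Configuration store's access keys.
    const connectionString = process.env.APP_CONFIG_CONNECTION_STRING!;
    const client = new AppConfigurationClient(connectionString);

    // Fetch a single key-value; labels can be used to keep per-environment
    // variants (e.g. "Production") of the same key.
    const setting = await client.getConfigurationSetting({ key: "MyApp:ApiEndpoint" });
    console.log(`${setting.key} = ${setting.value}`);
}

readSetting().catch(console.error);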

App Configuration allows you to maintain control over the configuration data and handle it with confidence.

Increase release velocity with feature flags

One of the core solutions we provide with App Configuration is feature management. Traditionally, a new application feature needs to go through a series of tests before it can be released, which generally leads to long development cycles. Newer software engineering practices, such as feature management using feature flags, help shorten these cycles by enabling real testing in production while safeguarding application stability. Feature management solves a multitude of developer challenges, especially when building applications for the cloud.

App Configuration provides built-in support for feature management. You can leverage it to remotely control feature availability in your deployed application. While it can be used from any programming language through its REST APIs, the .NET Core and ASP.NET Core libraries offer a complete end-to-end solution out of the box.

Get started now

App Configuration provides a complete turnkey solution for dealing with application settings and feature flags. It’s easy to onboard and use. You can find the complete documentation in the Azure App Configuration Preview documentation. Please give it a try and let us know what you think!


Visual Studio Extensibility Day 2019 was a hit

On Friday, May 10th we hosted both internal and external Visual Studio extension authors in the Workshop room in building 18 on the Microsoft campus in Redmond. It was a full-day event with keynotes and sessions for 60 attendees – half of whom attended //build earlier that same week, and half who came just for Extensibility Day.

Attendees

The attendees were a mix of longtime VSIP partners, hobbyists, first-party Microsoft teams, and for-profit third-party extenders. About half the attendees extended their //build conference stay to be able to attend the event. The rest came from all over the world, and some flew to Redmond just to attend Extensibility Day.

We’ve done similar events in the past exclusively for the VSIP Partners, but this was the first time we invited the larger extensibility community. For a lot of the attendees, this was their first time talking to and socializing with other extension authors in person.

Agenda

To kick off the event, Corporate Vice President John Montgomery and Director of Program Management Amanda Silver each gave a keynote. After that, sessions about various extensibility topics filled the rest of the day.

All sessions were highly technical, with lots of demos.


Amanda giving her keynote about the future of Visual Studio (photo by @syuheiuda)

After the sessions ended, we went to the Microsoft Company Store and Visitor Center, followed by drinks and dinner at the Boardwalk restaurant in the heart of the Microsoft campus. People stayed until the restaurant closed.

Addressing pain points

During the day, the attendees helped identify their main pain points, produced a prioritized list of documentation and samples for us to provide, and helped organize our backlog of features to implement. They then voted on the priority of each item, and these are the results:

Top 5 missing pieces of documentation (in prioritized order):

  1. How to run and write integration tests
  2. How to debug and profile performance and memory issues
  3. DTE or IVs* – which one to use, how and when?
  4. How to access telemetry data collected by VS about our extensions
  5. How to target multiple versions of VS

Top 5 missing features (in prioritized order):

  1. Added integration test tooling
  2. Develop extensions in .NET Core
  3. Define VSCT from code instead of XML
  4. Marketplace extensions should have a private preview feature
  5. Ability to revert extensions to earlier versions

Remote-powered developer tools

A few weeks ago, we announced plans to enable a remote-powered developer experience for Visual Studio. It was met with great interest by the attendees, who had a lot of questions about how it relates to extension development. It’s still early, and there are a lot of unknowns for us to investigate. We’ll make sure to keep everyone in the loop as we know more. Stay tuned on this blog for that information.

Late notice

We sent out the invitations a bit late, so we were afraid that people wouldn’t be able to make it on such short notice. If you were among the people who couldn’t make it, I do apologize and assure you that next year we will send the invitations out much earlier.

Feedback about the event

The attendees filled out an evaluation form online after the event, and the feedback was overwhelmingly positive. Everything from the list of sessions and the backlog prioritization to the food served for lunch received top ratings.

I really enjoyed visiting the campus and getting a chance to meet the team and other extension authors. I found that hearing about possible future direction for VS, participating in documentation/backlog prioritization and getting a broader sense of the ecosystem all really helped me answer questions about where to head next for CodeMaid. – Steve Cadwallader, author of CodeMaid

There was room for improvement too, and the top suggestions for next time are:

  1. More time to mingle and socialize with fellow extenders
  2. More hands-on and Q&A time with the Visual Studio team
  3. Let the attendees vote on what sessions to see at time of registration

To summarize: this was a great event, and I hope we can continue to do events like this every year. Thanks to all the attendees for coming and helping make the day one to remember for all of us.

The post Visual Studio Extensibility Day 2019 was a hit appeared first on The Visual Studio Blog.

Take your analog data digital for a faster, more efficient way to work

New to Microsoft 365 in May—new tools to streamline compliance and make collaboration inclusive and engaging

AI, Machine Learning and Data Science Roundup: May 2019

A monthly roundup of news about Artificial Intelligence, Machine Learning and Data Science. This is an eclectic collection of interesting blog posts, software announcements and data applications from Microsoft and elsewhere that I've noted over the past month or so.

Open Source AI, ML & Data Science News

PyTorch 1.1 is now available, with new support for Tensorboard and improvements to distributed training and JIT compilation.

JupyterHub 1.0 is released, a milestone for the multi-user Jupyter Notebook server.

matplotlib 3.1 is released, adding several improvements to the Python data visualization library.

Python is now included in Windows 10, with updates available via the Microsoft Store.

FastBert, a simple PyTorch interface for training text classifiers based on the popular language representation model BERT, is released.

torchvision 0.3, the PyTorch library of datasets and tools for computer vision, adds new models for semantic segmentation and object detection.

ML.NET 1.0, the open-source cross-platform machine learning framework for .NET developers, is now available.

R 3.6.0 is released, with many new capabilities and improved memory usage and performance.

Industry News

The Wolfram Engine, featured in Mathematica and including a suite of algorithms for visualization, machine learning, NLP and more, is now available free for development (license required for production).  

Facebook releases Pythia, a new open-source deep learning framework based on PyTorch, for multitasking in the vision and language domain.

Google Cloud AutoML Natural Language provides an interactive UI to classify content and build a predictive ML model without coding.

Google Cloud TPU Pods, cloud-based clusters that can include more than 1,000 TPU chips as an "ML supercomputer", are now publicly available in beta. NVIDIA T4 GPUs are also now generally available in GCP.

Snips open-sources Tract, an embedded neural network inference engine designed for wake-word detection by virtual assistant devices.

Hewlett Packard Enterprise announced plans to acquire Cray, the supercomputer company.

Intel announces several open-source initiatives for AI and cloud technologies, including a Deep Learning Reference Stack optimized for Intel chipsets.

Microsoft News

Azure Machine Learning Service adds new MLOps capabilities, providing version control, audit trails, and packaging, deployment and monitoring support for machine learning models via an Azure DevOps extension. Also, model deployment to FPGA is now generally available in Azure (and additionally in preview for Databox Edge).

Further updates to Azure Machine Learning Service are now in preview, including a new drag-and-drop visual interface, a new form-based UI for automated machine learning, model interpretability, and hosted Python notebooks.

ONNX Runtime 0.4 is released, adding support for Intel and NVIDIA accelerators to further reduce latency for deployed neural networks.

Azure Cognitive Services has added many new capabilities.

Visual Studio Code adds Remote Development, allowing use of remote Python workspaces over SSH, in Docker containers, and in Windows Subsystem for Linux. Other recent Python support improvements include Intellisense in the console and enhancements to the Python Language Server.

Azure Data Explorer now supports queries with custom Python code, as well as integration with Spark.

Azure SQL Database Edge, a small-footprint data engine with support for Python, R and Spark and optimized for edge devices and time-series data, is now in private preview.

Microsoft announces an end-to-end toolchain for autonomous systems (in preview), which developers can use to simulate and build robots and other AI-driven autonomous devices.

Learning resources

A beginner's tutorial on training a convolutional neural network, using only Python and numpy.

Rules of Machine Learning: Google's list of best practices for developers looking to create applications with machine learning capabilities. 

ODSC suggests 25 public data sets to get started with machine learning, spanning text, images, and tabular data.

Foundations of Data Science, a free book by Avrim Blum, John Hopcroft and Ravi Kannan with a focus on matrix decompositions and associated ML techniques.

Azure Open Datasets, a collection of curated public datasets easily accessible to Azure ML services.

Open Images v5, a large collection of annotated images including segmentation masks for 2.8 million objects in 350 categories, has been released by Google.

Microsoft Learn now offers several modules with free training for AI engineers and data scientists.

Six principles behind health data-driven organizations, a useful resource on building a data science culture by Francesca Lazzeri.

Applications

How Python is used at Netflix for personalization, machine learning, experimentation, statistical analysis and more.

Google develops a method to infer depth maps for video by using "mannequin challenge" videos as training data.

Samsung researchers develop a method of animating a single photograph of a person as a realistic talking head (video).

Google's "Translatotron", a speech-to-speech translation model that translates speech audio into a second language while retaining the original speaking voice.

AI-based applications help children with disabilities bridge language gaps.

Facebook researchers develop a method to synthesize full-body video of a person performing actions animated in real time under joystick control, based only on real source video.

LaLiga and BMW are using the Azure Bot Framework SDK to deliver specialized personal assistant applications.

Find previous editions of the monthly AI roundup here.

Announcing TypeScript 3.5

Today we’re happy to announce the availability of TypeScript 3.5!

If you’re new to TypeScript, it’s a language that builds on JavaScript that adds optional static types. TypeScript code gets type-checked to avoid common mistakes like typos and accidental coercions, and then gets transformed by a program called the TypeScript compiler. The compiler strips out any TypeScript-specific syntax and optionally transforms your code to work with older browsers, leaving you with clean, readable JavaScript that can run in your favorite browser or Node.js. Built on top of all this is also a language service which uses all the type information TypeScript has to provide powerful editor functionality like code completions, find-all-references, quick fixes, and refactorings. All of this is cross-platform, cross-editor, and open source.
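
If you have never seen the language before, here is a tiny illustrative example (not from the announcement itself) of the kind of mistake the type-checker catches; the single annotation on the parameter is all that is needed.

function lengthInKilometers(meters: number) {
    return meters / 1000;
}

lengthInKilometers("5000");
//                 ~~~~~~
// error! Argument of type '"5000"' is not assignable to parameter of type 'number'.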

TypeScript also provides that same tooling for JavaScript users, and can even type-check JavaScript code typed with JSDoc using the checkJs flag. If you’ve used editors like Visual Studio or Visual Studio Code with .js files, TypeScript powers that experience, so you might already be using TypeScript!
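
For example, given a plain .js file with JSDoc annotations, enabling checkJs (or adding a // @ts-check comment at the top of the file) lets TypeScript flag the same kinds of mistakes without converting the file to .ts. A small illustrative sketch:

// @ts-check

/**
 * @param {number} price
 * @param {number} taxRate
 */
function totalWithTax(price, taxRate) {
    return price * (1 + taxRate);
}

totalWithTax(100, "0.2");
//                ~~~~~
// error! Argument of type '"0.2"' is not assignable to parameter of type 'number'.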

To get started with TypeScript, you can get it through NuGet, or through npm with the following command:

npm install -g typescript

You can also get editor support in editors like Visual Studio and Visual Studio Code.

Support for other editors will likely arrive in the near future.

Let’s explore what’s new in 3.5!

Speed improvements

TypeScript 3.5 introduces several optimizations around type-checking and incremental builds.

Type-checking speed-ups

Much of the expressivity of our type system comes with a cost – any more work that we expect the compiler to do translates to longer compile times. Unfortunately, as part of a bug fix in TypeScript 3.4 we accidentally introduced a regression that could lead to an explosion in how much work the type-checker did, and in turn, type-checking time. The most-impacted set of users were those using the styled-components library. This regression was serious not just because it led to much higher build times for TypeScript code, but because editor operations for both TypeScript and JavaScript users became unbearably slow.

Over this past release, we focused heavily on optimizing certain code paths and stripping down certain functionality to the point where TypeScript 3.5 is actually faster than TypeScript 3.3 for many incremental checks. Not only have compile times fallen compared to 3.4, but code completion and any other editor operations should be much snappier too.

If you haven’t upgraded to TypeScript 3.4 due to these regressions, we would value your feedback to see whether TypeScript 3.5 addresses your performance concerns!

--incremental improvements

TypeScript 3.4 introduced a new --incremental compiler option. This option saves a bunch of information to a .tsbuildinfo file that can be used to speed up subsequent calls to tsc.

TypeScript 3.5 includes several optimizations to caching how the state of the world was calculated – compiler settings, why files were looked up, where files were found, etc. In scenarios involving hundreds of projects using TypeScript’s project references in --build mode, we’ve found that the amount of time rebuilding can be reduced by as much as 68% compared to TypeScript 3.4!

For more details, you can see the relevant pull requests on GitHub.

The Omit helper type

Much of the time, we want to create an object that omits certain properties. It turns out that we can express types like that using TypeScript’s built-in Pick and Exclude helpers. For example, if we wanted to define a Person that has no location property, we could write the following:

type Person = {
    name: string;
    age: number;
    location: string;
};

type RemainingKeys = Exclude<keyof Person, "location">;

type QuantumPerson = Pick<Person, RemainingKeys>;

// equivalent to
type QuantumPerson = {
    name: string;
    age: number;
};

Here we “subtracted” "location" from the set of properties of Person using the Exclude helper type. We then picked them right off of Person using the Pick helper type.

It turns out this type of operation comes up frequently enough that users will write a helper type to do exactly this:

type Omit<T, K extends keyof any> = Pick<T, Exclude<keyof T, K>>;

Instead of making everyone define their own version of Omit, TypeScript 3.5 will include its own in lib.d.ts which can be used anywhere. The compiler itself will use this Omit type to express types created through object rest destructuring declarations on generics.
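
With the built-in helper, the QuantumPerson example above becomes a one-liner (reusing the Person type from the earlier snippet):

type QuantumPerson = Omit<Person, "location">;

// equivalent to
// type QuantumPerson = {
//     name: string;
//     age: number;
// };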

For more details, see the pull request on GitHub to add Omit, as well as the change to use Omit for object rest.

Improved excess property checks in union types

TypeScript has a feature called excess property checking in object literals. This feature is meant to detect typos for when a type isn’t expecting a specific property.

type Style = {
    alignment: string,
    color?: string
};

const s: Style = {
    alignment: "center",
    colour: "grey"
//  ^^^^^^ error! 
};

In TypeScript 3.4 and earlier, certain excess properties were allowed in situations where they really shouldn’t have been. For instance, TypeScript 3.4 permitted the incorrect name property in the object literal even though its types don’t match between Point and Label.

type Point = {
    x: number;
    y: number;
};

type Label = {
    name: string;
};

const thing: Point | Label = {
    x: 0,
    y: 0,
    name: true // uh-oh!
};

Previously, a non-discriminated union wouldn’t have any excess property checking done on its members, and as a result, the incorrectly typed name property slipped by.

In TypeScript 3.5, the type-checker at least verifies that all the provided properties belong to some union member and have the appropriate type, meaning that the sample above correctly issues an error.

Note that partial overlap is still permitted as long as the property types are valid.

const pl: Point | Label = {
    x: 0,
    y: 0,
    name: "origin" // okay
};

The --allowUmdGlobalAccess flag

In TypeScript 3.5, you can now reference UMD global declarations like

export as namespace foo;

from anywhere – even modules – using the new --allowUmdGlobalAccess flag.

This feature might require some background if you’re not familiar with UMD globals in TypeScript. A while back, JavaScript libraries were often published as global variables with properties tacked on – you sort of hoped that nobody picked a library name that was identical to yours. Over time, authors of modern JavaScript libraries started publishing using module systems to prevent some of these issues. While module systems alleviated certain classes of issues, they did leave users who were used to using global variables out in the rain.

As a work-around, many libraries are authored in a way that define a global object if a module loader isn’t available at runtime. This is typically leveraged when users target a module format called “UMD”, and as such, TypeScript has a way to describe this pattern which we’ve called “UMD global namespaces”:

export as namespace preact;

Whenever you’re in a script file (a non-module file), you’ll be able to access one of these UMD globals.

So what’s the problem? Well, not all libraries conditionally set their global declarations. Some just always create a global in addition to registering with the module system. We decided to err on the more conservative side, and many of us felt that if a library could be imported, that was probably the intent of the author.

In reality, we received a lot of feedback that users were writing modules where some libraries were consumed as globals, and others were consumed through imports. So in the interest of making those users’ lives easier, we’ve introduced the allowUmdGlobalAccess flag in TypeScript 3.5.
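
As a rough sketch of what the flag enables, consider a module file that also reaches for a library exposed as a UMD global. preact is used here only as an illustration, and renderApp and ./app are hypothetical names:

// This file is a module because it has an import.
import { renderApp } from "./app";

// Referencing the UMD global 'preact' from inside a module used to be an error;
// with --allowUmdGlobalAccess in TypeScript 3.5, it is allowed.
const vnode = preact.h("div", { id: "root" }, "hello");

renderApp(vnode);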

For more details, see the pull request on GitHub.

Smarter union type checking

When checking against union types, TypeScript typically compares each constituent type in isolation. For example, take the following code:

type S = { done: boolean, value: number }
type T =
    | { done: false, value: number }
    | { done: true, value: number };

declare let source: S;
declare let target: T;

target = source;

Assigning source to target involves checking whether the type of source is assignable to target. That in turn means that TypeScript needs to check whether S:

{ done: boolean, value: number }

is assignable to T:

{ done: false, value: number } | { done: true, value: number }

Prior to TypeScript 3.5, the check in this specific example would fail, because S isn’t assignable to { done: false, value: number } nor { done: true, value: number }. Why? Because the done property in S isn’t specific enough – it’s boolean whereas each constituent of T has a done property that’s specifically true or false. That’s what we meant by each constituent type being checked in isolation: TypeScript doesn’t just union each property together and see if S is assignable to that. If it did, some bad code could get through like the following:

interface Foo {
    kind: "foo";
    value: string;
}

interface Bar {
    kind: "bar";
    value: number;
}

function doSomething(x: Foo | Bar) {
    if (x.kind === "foo") {
        x.value.toLowerCase();
    }
}

// uh-oh - luckily TypeScript errors here!
doSomething({
    kind: "foo",
    value: 123,
});

So clearly this behavior is good for some set of cases. Was TypeScript being helpful in the original example though? Not really. If you figure out the precise type of any possible value of S, you can actually see that it matches the types in T exactly.

That’s why in TypeScript 3.5, when assigning to types with discriminant properties like in T, the language actually will go further and decompose types like S into a union of every possible inhabitant type. In this case, since boolean is a union of true and false, S will be viewed as a union of { done: false, value: number } and { done: true, value: number }.

For more details, you can see the original pull request on GitHub.

Higher order type inference from generic constructors

In TypeScript 3.4, we improved inference for when generic functions that return functions like so:

function compose<T, U, V>(
    f: (x: T) => U, g: (y: U) => V): (x: T) => V {
    
    return x => g(f(x))
}

took other generic functions as arguments, like so:

function arrayify<T>(x: T): T[] {
    return [x];
}

type Box<U> = { value: U }
function boxify<U>(y: U): Box<U> {
    return { value: y };
}

let newFn = compose(arrayify, boxify);

Instead of a relatively useless type like (x: {}) => Box<{}[]>, which older versions of the language would infer, TypeScript 3.4’s inference allows newFn to be generic. Its new type is <T>(x: T) => Box<T[]>.

TypeScript 3.5 generalizes this behavior to work on constructor functions as well.

class Box<T> {
    kind: "box";
    value: T;
    constructor(value: T) {
        this.value = value;
    }
}

class Bag<U> {
    kind: "bag";
    value: U;
    constructor(value: U) {
        this.value = value;
    }
}


function composeCtor<T, U, V>(
    F: new (x: T) => U, G: new (y: U) => V): (x: T) => V {
    
    return x => new G(new F(x))
}

let f = composeCtor(Box, Bag); // has type '<T>(x: T) => Bag<Box<T>>'
let a = f(1024); // has type 'Bag<Box<number>>'

In addition to compositional patterns like the above, this new inference on generic constructors means that functions that operate on class components in certain UI libraries like React can more correctly operate on generic class components.

type ComponentClass<P> = new (props: P) => Component<P>;
declare class Component<P> {
    props: P;
    constructor(props: P);
}

declare function myHoc<P>(C: ComponentClass<P>): ComponentClass<P>;

type NestedProps<T> = { foo: number, stuff: T };

declare class GenericComponent<T> extends Component<NestedProps<T>> {
}

// type is 'new <T>(props: NestedProps<T>) => Component<NestedProps<T>>'
const GenericComponent2 = myHoc(GenericComponent);

To learn more, check out the original pull request on GitHub.

Smart Select

TypeScript 3.5 provides an API for editors to expand text selections farther and farther outward in a way that is syntactically aware – in other words, the editor knows which constructs it should expand out to. This feature is called Smart Select, and the result is that editors don’t have to resort to heuristics like brace-matching, and you can expect selection expansion in editors like Visual Studio Code to “just work”.

Smart selection in action

As with all of our editing features, this feature is cross-platform and available to any editor which can appropriately query TypeScript’s language server.

Extract to type alias

Thanks to Wenlu Wang (GitHub user @Kingwl), TypeScript supports a useful new refactoring to extract types to local type aliases.

Example of extracting to a type alias

For those who prefer interfaces over type aliases, an issue exists for extracting object types to interfaces as well.

Breaking changes

Generic type parameters are implicitly constrained to unknown

In TypeScript 3.5, generic type parameters without an explicit constraint are now implicitly constrained to unknown, whereas previously the implicit constraint of type parameters was the empty object type {}.

In practice, {} and unknown are pretty similar, but there are a few key differences (sketched in code after this list):

  • {} can be indexed with a string (k["foo"]), though this is an implicit any error under --noImplicitAny.
  • {} is assumed to not be null or undefined, whereas unknown is possibly one of those values.
  • {} is assignable to object, but unknown is not.
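
A quick sketch of those differences, assuming --strict is enabled:

declare let emptyish: {};
declare let mystery: unknown;

emptyish["foo"];             // implicitly 'any' – an error under --noImplicitAny
// mystery["foo"];           // error! Object is of type 'unknown'.

let o1: object = emptyish;   // okay: {} is assignable to 'object'
// let o2: object = mystery; // error! 'unknown' is not assignable to 'object'.

// emptyish = null;          // error under --strictNullChecks: {} excludes null and undefined
mystery = null;              // okay: 'unknown' may hold any value, including null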

The decision to switch to unknown is rooted in the fact that unknown is more correct for unconstrained generics – there’s no telling how a generic type will be instantiated.

On the caller side, this typically means that assignment to object will fail, and methods on Object like toString, toLocaleString, valueOf, hasOwnProperty, isPrototypeOf, and propertyIsEnumerable will no longer be available.

function foo<T>(x: T): [T, string] {
    return [x, x.toString()]
    //           ~~~~~~~~ error! Property 'toString' does not exist on type 'T'.
}

As a workaround, you can add an explicit constraint of {} to a type parameter to get the old behavior.

//             vvvvvvvvvv
function foo<T extends {}>(x: T): [T, string] {
    return [x, x.toString()]
}

From the caller side, failed inferences for generic type arguments will result in unknown instead of {}.

function parse<T>(x: string): T {
    return JSON.parse(x);
}

// k has type 'unknown' - previously, it was '{}'.
const k = parse("...");

As a workaround, you can provide an explicit type argument:

// 'k' now has type '{}'
const k = parse<{}>("...");

{ [k: string]: unknown } is no longer a wildcard assignment target

The index signature { [s: string]: any } in TypeScript behaves specially: it’s a valid assignment target for any object type. This is a special rule, since types with index signatures don’t normally produce this behavior.

Since its introduction, the type unknown in an index signature behaved the same way:

let dict: { [s: string]: unknown };
// Was okay
dict = () => {};

In general this rule makes sense; the implied constraint of “all its properties are some subtype of unknown” is trivially true of any object type. However, in TypeScript 3.5, this special rule is removed for { [s: string]: unknown }.

This was a necessary change because of the change from {} to unknown when generic inference has no candidates. Consider this code:

declare function someFunc(): void;
declare function fn<T>(arg: { [k: string]: T }): void;
fn(someFunc);

In TypeScript 3.4, the following sequence occurred:

  • No candidates were found for T
  • T is selected to be {}
  • someFunc isn’t assignable to arg because there are no special rules allowing arbitrary assignment to { [k: string]: {} }
  • The call is correctly rejected

Due to changes around unconstrained type parameters falling back to unknown (see above), arg would have had the type { [k: string]: unknown }, which anything is assignable to, so the call would have incorrectly been allowed. That’s why TypeScript 3.5 removes the specialized assignability rule to permit assignment to { [k: string]: unknown }.

Note that fresh object literals are still exempt from this check.

const obj = { m: 10 }; 
// okay
const dict: { [s: string]: unknown } = obj;

Depending on the intended behavior of { [s: string]: unknown }, several alternatives are available:

  • { [s: string]: any }
  • { [s: string]: {} }
  • object
  • unknown
  • any

We recommend sketching out your desired behavior and seeing which of these is the best option for your particular use case.

Improved excess property checks in union types

As mentioned above, TypeScript 3.5 is stricter about excess property checks on constituents of union types.

So far, this stricter checking has only caught legitimate issues in the examples we have seen, but in a pinch, either of the usual workarounds to disable excess property checking will apply (both are sketched after this list):

  • Add a type assertion onto the object (e.g. { myProp: SomeType } as ExpectedType)
  • Add an index signature to the expected type to signal that unspecified properties are expected (e.g. interface ExpectedType { myProp: SomeType; [prop: string]: unknown })
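
Spelled out, both workarounds look like this; SomeType and ExpectedType are the placeholder names from the bullets above:

type SomeType = string;

interface ExpectedType {
    myProp: SomeType;
}

// Workaround 1: a type assertion suppresses excess property checking.
const a: ExpectedType = { myProp: "x", extra: 42 } as ExpectedType;

// Workaround 2: an index signature tells the checker that extra properties are expected.
interface ExpectedTypeWithExtras {
    myProp: SomeType;
    [prop: string]: unknown;
}
const b: ExpectedTypeWithExtras = { myProp: "x", extra: 42 };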

Fixes to unsound writes to indexed access types

TypeScript allows you to represent the operation of accessing a property of an object via the name of that property:

type A = {
    s: string;
    n: number;
};

function read<K extends keyof A>(arg: A, key: K): A[K] {
    return arg[key];
} 

const a: A = { s: "", n: 0 };
const x = read(a, "s"); // x: string

While commonly used for reading values from an object, you can also use this for writes:

function write<K extends keyof A>(arg: A, key: K, value: A[K]): void {
    arg[key] = value;
}

In TypeScript 3.4, the logic used to validate a write was much too permissive:

function write<K extends keyof A>(arg: A, key: K, value: A[K]): void {
    // ???
    arg[key] = "hello, world";
}
// Breaks the object by putting a string where a number should be
write(a, "n", "oops");

In TypeScript 3.5, this logic is fixed and the above sample correctly issues an error.

Most instances of this error represent potential errors in the relevant code. If you are convinced that you are not dealing with an error, you can use a type assertion instead.

lib.d.ts includes the Omit helper type

TypeScript 3.5 includes a new Omit helper type. As a result, any global declarations of Omit included in your project will result in the following error message:

Duplicate identifier 'Omit'.

Two workarounds may be used here:

  1. Delete the duplicate declaration and use the one provided in lib.d.ts.
  2. Export the existing declaration from a module file or a namespace to avoid a global collision. Existing usages can use an import or explicit reference to your project’s old Omit type (a minimal sketch follows this list).
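
For the second workaround, a minimal sketch; the file and type names here are placeholders:

// legacy-types.ts – the project's pre-existing helper, now module-scoped
// instead of global, so it no longer collides with the lib.d.ts Omit.
export type Omit<T, K extends keyof any> = Pick<T, Exclude<keyof T, K>>;

// consumer.ts
import { Omit } from "./legacy-types";   // existing usages keep the old behavior

type Person = { name: string; age: number; location: string };
type QuantumPerson = Omit<Person, "location">;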

Object.keys rejects primitives in ES5

In ECMAScript 5 environments, Object.keys throws an exception if passed any non-object argument:

// Throws if run in an ES5 runtime
Object.keys(10);

In ECMAScript 2015, Object.keys returns [] if its argument is a primitive:

// [] in ES6 runtime
Object.keys(10);

This is a potential source of error that wasn’t previously identified. In TypeScript 3.5, if target (or equivalently lib) is ES5, calls to Object.keys must pass a valid object.

In general, errors here represent possible exceptions in your application and should be treated as such. If you happen to know through other means that a value is an object, a type assertion is appropriate:

function fn(arg: object | number, isArgActuallyObject: boolean) {
    if (isArgActuallyObject) {
        const k = Object.keys(arg as object);
    }
}

Note that this change interacts with the change in generic inference from {} to unknown, because {} is a valid object whereas unknown isn’t:

declare function fn<T>(): T;

// Was okay in TypeScript 3.4, errors in 3.5 under --target ES5
Object.keys(fn());

What’s next?

As with our last release, you can see our 3.6 iteration plan document, as well as the feature roadmap page, to get an idea of what’s coming in the next version of TypeScript. We’re anticipating 3.6 will bring a better experience for authoring and consuming generators, support for ECMAScript’s private fields proposal, and APIs for build tools to support fast incremental builds and project references. Also of note is the fact that as of TypeScript 3.6, our release schedule will be switching to a cadence of every 3 months (instead of every 2 months as it has been until this point). We believe this will make it easier for us to validate changes with partner teams.

We hope that this version of TypeScript makes you faster and happier as you code. Let us know what you think of this release on Twitter, and if you’ve got any suggestions on what we can do better, feel free to file an issue on GitHub.

Happy hacking!

– Daniel Rosenwasser and the TypeScript team

The post Announcing TypeScript 3.5 appeared first on TypeScript.
