
Announcing F# 5 preview 1


We’re excited to announce that F# 5 preview 1 is now available! Here’s how to get it:

If you’re using Visual Studio on Windows, you’ll need both the .NET 5 preview SDK and Visual Studio Preview installed.

Using F# 5 preview

You can use F# 5 preview via the .NET 5 preview SDK, or through the .NET and Jupyter Notebooks support.

If you’re using the .NET 5 preview SDK, check out a sample repository showing off some of what you can do with F# 5. You can play with each of the features there instead of starting from scratch.

If you’d rather use F# 5 in your own project, you’ll need to add a LangVersion property and add a package reference to the latest FSharp.Core in your project file. It should look something like this:
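Here's a sketch of such a project file. The FSharp.Core version shown is illustrative; reference whatever the latest preview package is at the time you try this.

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net5.0</TargetFramework>
    <LangVersion>preview</LangVersion>
  </PropertyGroup>

  <ItemGroup>
    <!-- Illustrative version; reference the latest preview FSharp.Core package. -->
    <PackageReference Update="FSharp.Core" Version="4.7.1" />
  </ItemGroup>

  <ItemGroup>
    <Compile Include="Program.fs" />
  </ItemGroup>

</Project>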

In future previews, the FSharp.Core reference shouldn’t be required.

Alternatively, if you’re using Jupyter Notebooks and want a more interactive experience, check out a sample notebook that shows the same features, but has a more interactive output.

F# 5 is focused on better interactive and analytical programming

It’s difficult to come up with a “theme” for a programming language release. Whenever we start a new “cycle” and think about what we’d like to do for the next version of F#, what we have in mind is often very different from what we end up shipping. This is caused by numerous factors: designs don’t work out the way we think they would, things are too difficult to implement at an acceptable level of quality, existing customers report problems that are very expensive and time consuming to fix, etc.

This time, things are different. We started with the intention of improving interactive programming with F#, aligning with the recent investments made to support .NET in Jupyter Notebooks. Interactive programming has historically been a strength of F#, but that aspect of the experience has been neglected for a few years. With interactive programming becoming increasingly important as machine learning and data science rise in popularity, it was clear that improvements had to be made to the overall experience. Many of these improvements are language changes, and we plan on introducing more features in future previews that are aligned with this goal.

Not every feature that ultimately ships with F# 5 is targeted specifically to make interactive programming better. F# is a general purpose language after all, and language additions or enhancements have broad use cases that go far beyond a particular style of programming. However, our intention is that the full set of F# 5 features combined makes interactive programming better than it is today. Let’s dive in!

Package references for F# scripts

One of the biggest problems with F# scripts today is incorporating packages. This is now easy to do with the new #r "nuget:..." command:
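For example, a short script along these lines (any package on NuGet.org works the same way) references JSON.NET and uses it immediately:

#r "nuget: Newtonsoft.Json"

open Newtonsoft.Json

// Serialize an anonymous record using the freshly restored package.
let data = {| Framework = "F#"; Version = 5 |}
printfn "%s" (JsonConvert.SerializeObject data)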

This will download and install the latest JSON.NET package (if it’s not in your package cache), resolve all dependencies, and let you use the library as if it were in a fully-fledged project.

To run this script (assuming you name it script.fsx) in F# interactive, simply type:

dotnet fsi --langversion:preview script.fsx

Note that the --langversion flag is required. This will not be required once F# 5 is released.

Alternatively, if you’re using Jupyter notebooks, simply execute the cell and it will print the result.

In future previews, we will work to make sure the editor experience in IDEs is in good shape. The focus on this first preview has been in hardening the core mechanism and integrating it with Jupyter Notebooks.

Enhanced slicing

Slicing data is critical when doing analytical work on sets of data. To that end, we enhanced F# slicing in three areas.

Consistent behavior for built-in data types

Today, behavior for slicing the built-in FSharp.Core data types (array, list, string, 2D array, 3D array, 4D array) is not consistent. Some edge-case behavior will throw an exception and some won’t. In F# 5 preview, all built-in types now return empty slices for slices that are impossible to generate:
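Here's a small sketch of the new behavior:

let l = [ 1..10 ]
let a = [| 1..10 |]
let s = "hello!"

// Prior to this preview, slices like these would throw.
// They now produce an empty list, array, and string respectively.
let emptyList = l.[-2..(-1)]
let emptyArray = a.[-2..(-1)]
let emptyString = s.[-2..(-1)]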

This change could be controversial to some – the variety of opinions on slicing behavior we’ve seen in other languages has shown that slicing behavior is a hotly debated topic – so we want people to try it out early and let us know how they feel about this change in their own code.

Fixed-index slices for 3D and 4D arrays in FSharp.Core

The built-in 3D and 4D array types have always supported slices, but they did not support fixing a particular index (such as the y-dimension in a 3D array). Now they do!

To illustrate this, consider the following 3D array:

z = 0

        x = 0   x = 1
y = 0     0       1
y = 1     2       3

z = 1

        x = 0   x = 1
y = 0     4       5
y = 1     6       7

What if we wanted to extract the slice [| 4; 5 |] from the array? This is now very simple!
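Here's a sketch that builds a 2 x 2 x 2 array laid out like the tables above and then fixes the y and z indices while slicing across x:

// Build the array so its values match the tables above.
let dim = 2
let m = Array3D.zeroCreate<int> dim dim dim

let mutable count = 0
for z in 0 .. dim - 1 do
    for y in 0 .. dim - 1 do
        for x in 0 .. dim - 1 do
            m.[x, y, z] <- count
            count <- count + 1

// Fix y = 0 and z = 1, slicing across the x dimension.
let slice = m.[*, 0, 1]   // [| 4; 5 |]
printfn "%A" slice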

This kind of slice was not possible prior to F# 5.

Reverse indexes and slicing from the end

Finally, we have added the concept of a reverse, or “from the end”, index. The syntax is ^idx. Here’s how you can get the element 1 value from the end of a list:
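A quick sketch:

let xs = [ 1..10 ]

// ^1 is the reverse index "1 from the end" of the list.
let fromEnd = xs.[^1]
printfn "%d" fromEnd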

You can also define reverse indexes for your own types. To do so, you’ll need to implement the following method:

GetReverseIndex: dimension: int -> offset: int

Here’s an example for the Span<'T> type:
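Below is one way to sketch this for Span<'T>: a GetSlice member enables the slicing syntax itself, and GetReverseIndex (defined here as Length - offset, one reasonable convention you can adjust) enables the ^ syntax.

open System

type Span<'T> with
    // Enables slicing syntax such as sp.[2..] and sp.[..^2].
    member sp.GetSlice(startIdx, endIdx) =
        let s = defaultArg startIdx 0
        let e = defaultArg endIdx sp.Length
        sp.Slice(s, e - s)

    // Enables the ^ reverse-index syntax.
    member sp.GetReverseIndex(_dim: int, offset: int) =
        sp.Length - offset

let printSpan (sp: Span<int>) =
    printfn "%A" (sp.ToArray())

let run () =
    let sp = Span<int>([| 1; 2; 3; 4; 5 |])
    printSpan sp.[..^2]   // prints [|1; 2; 3|]

run ()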

We feel that these three enhancements will make slicing data types more convenient in F# 5. What do you think?

The nameof function

The nameof function is a new addition to F#. It’s very useful for things like logging or validating parameters to functions. Because it uses actual F# symbols instead of string literals, it makes refactoring names over time less difficult.

Consider the following example:
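Here's a small sketch along those lines (the validation logic itself is illustrative):

let lookupMonth (month: int) =
    if month < 1 || month > 12 then
        invalidArg (nameof month) (sprintf "%d is not a valid month number" month)
    printfn "Month %d is valid" month

lookupMonth 1
lookupMonth 12
lookupMonth 13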

The last line will throw an exception, and the name of the parameter month will be shown in the message.

You can take the name of almost everything in F#, such as:

  • Parameters
  • Functions
  • Classes
  • Modules
  • Namespaces

There are some current restrictions on overloaded methods and type parameters that we are planning on addressing in future previews.

Opening static classes

We’re introducing the ability to “open” a static class as if it were a module or namespace. This applies to any static class in .NET (or any package), or your own F#-defined static class.
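As a quick sketch of the idea (the exact syntax may change as the design evolves), opening System.Math brings its static methods into scope so they can be called unqualified:

open System.Math

let x = Min(1.0, 2.0)
let y = Max(3.0, 4.0)
printfn "%f %f" x y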

There are currently a few unresolved design questions related to shadowing vs. building an overloaded set of methods when combining members from static classes that have the same name. Today, they shadow. That may change in the future. We’re also evaluating if we wish to support opening generic static classes with specific generic substitutions. This kind of change would make the feature very expressive, but it would also be fairly advanced and/or niche.

Applicative computation expressions

Computation expressions (CEs) are used today to model “contextual computations”, or in more FP-friendly terminology, monadic computations. However, they are a more flexible construct than just offering syntax for monads.

F# 5 introduces applicative CEs, which are a slightly different form of CE than what you’re perhaps used to. Applicative CEs allow for significantly more efficient computations, provided that every computation is independent and their results are merely accumulated at the end. When computations are independent of one another, they are also trivially parallelizable. This benefit comes with a restriction, though: computations that depend on previously computed values are not allowed.

The following example shows a basic applicative CE for the Result type.
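Below is a minimal sketch of such a builder, written against the preview's BindReturn/MergeSources builder methods (see the note for library authors further down); names and details may differ from the final design.

type ResultBuilder() =
    member _.MergeSources(r1: Result<'a, 'e>, r2: Result<'b, 'e>) =
        match r1, r2 with
        | Ok a, Ok b -> Ok (a, b)
        | Error e, _ -> Error e
        | _, Error e -> Error e

    member _.BindReturn(r: Result<'a, 'e>, f) =
        Result.map f r

let result = ResultBuilder()

// Each computation is independent; 'and!' combines them applicatively.
let addThree x y z =
    result {
        let! a = x
        and! b = y
        and! c = z
        return a + b + c
    }

printfn "%A" (addThree (Ok 1) (Ok 2) (Ok 3))          // Ok 6
printfn "%A" (addThree (Ok 1) (Error "oops") (Ok 3))  // Error "oops"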

We’re excited to see the clever ways F# programmers will utilize this feature, especially in their own libraries.

If you’re a library author who exposes CEs in their library today, there are some additional considerations you’ll need to be aware of. We recommend that all interested library authors read the summary of new builder methods to determine which to use. We will document these members in the official documentation for CEs once F# 5 is closer to release and the overall design is no longer subject to change.

For consumers of applicative CEs, things aren’t too different from the CEs that you already use. The previously-mentioned restriction around independent computations is the key concept to understand.

The road ahead for F# 5

Despite a number of features being available today, we’re still very much in active development for F# 5. When new features are ready, we’ll release them in the next available .NET 5 preview. If you’re curious about what could be coming next, check out the following links:

Finally, we track all language suggestions in our language suggestions repository. There are quite a lot of suggestions you can learn about, and we encourage you to participate in each discussion.

Because previews are released so that we can get feedback from users, we might make breaking changes from one preview to the next to accommodate feedback we feel deserves a design change. We might also decide to keep a feature in preview for the F# 5 GA release if there is enough feedback that the design isn’t quite right. We encourage you to try these features out and let us know what you feel needs improvement!

Special addendum (2020-03-18): we’re hiring!

Want to help shape F# 5 and future releases? We’re expanding our team in Prague, Czech Republic! If the idea of working on new language and compiler features and improving the overall F# developer experience sounds interesting, give the job posting a look.

Cheers, and happy hacking!

The post Announcing F# 5 preview 1 appeared first on .NET Blog.


Filesystem SDKs for Azure Data Lake Storage Gen2 now generally available


Since the general availability of Azure Data Lake Storage (ADLS) Gen2 in February 2019, customers have been gaining insights for their big data analytics workloads at cloud scale. Integration with analytics engines is critical for their analytics workloads, and equally important is the ability to programmatically ingest, manage, and analyze data. This ability is critical for key areas of enterprise data lakes such as data ingestion, event-driven big data platforms, machine learning (ML), and advanced analytics. Programmatic access is possible today using ADLS Gen2 REST APIs, Blob REST APIs, or capabilities via Multi-Protocol Access. As part of our developer ecosystem journey, our goal is to make customer application development for programmatic access easier than ever before.

Towards this goal, we're announcing the general availability of Python, .NET, Java, and JS filesystem SDKs for Azure Data Lake Storage (ADLS) Gen2 in all Azure regions. This includes support for CRUD operations for filesystem, directories, files, and permissions with filesystem semantics for ADLS Gen2. Customers can now use this familiar filesystem programming model to simplify application development for ADLS Gen2. These filesystem SDKs streamline our customers’ ability to ingest, manage, and analyze data for ADLS Gen2 and help them gain insights at cloud scale faster than ever before.
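As a rough illustration of the filesystem programming model, here is a sketch using the .NET SDK (Azure.Storage.Files.DataLake); the account name, key, and paths are placeholders, and error handling is omitted.

using System;
using System.IO;
using System.Text;
using Azure.Storage;
using Azure.Storage.Files.DataLake;

class AdlsGen2Sketch
{
    static void Main()
    {
        // Placeholder account name and key; substitute your own values.
        var credential = new StorageSharedKeyCredential("<account-name>", "<account-key>");
        var serviceClient = new DataLakeServiceClient(
            new Uri("https://<account-name>.dfs.core.windows.net"), credential);

        // Create a filesystem, a directory, and a file with filesystem semantics.
        var fileSystem = serviceClient.GetFileSystemClient("my-filesystem");
        fileSystem.CreateIfNotExists();

        var directory = fileSystem.GetDirectoryClient("raw/sensor-data");
        directory.Create();

        var file = directory.GetFileClient("readings.csv");
        file.Create();

        // Append content, then flush to commit it to the file.
        var bytes = Encoding.UTF8.GetBytes("timestamp,value\n");
        using var stream = new MemoryStream(bytes);
        file.Append(stream, offset: 0);
        file.Flush(position: bytes.Length);
    }
}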

Preview feedback

Many of our customers have tried out the ADLS Gen2 SDK preview builds for their scenarios successfully. Here are some common themes based on preview feedback:

  • The SDK is working seamlessly with the new filesystem semantics and has successfully moved key data domains to ADLS Gen2. The SDK expedited the transfer of 450 GB of data from ADLS Gen1 to ADLS Gen2 within a few hours. The permissions set up at the root-level directory are working well with hierarchical namespace enabled, and all the permissions propagate correctly to the child items through the folder hierarchy.
  • The SDK is critical to the way customers orchestrate their deployments.
  • The SDK has helped ingest large amounts of IoT data to be used by data scientists for their analytics workloads. This has been instrumental in providing self-service environments for the researchers with access to their own set of directories.
  • Data ingestion pipelines have used the SDK to integrate drone image data, satellite image data, ground sensor data, and weather data into ADLS Gen2. This helps build custom ML models which generate additional business insights for customers. Customers can use these ML models or aggregate raw data based on their needs and store processed results back into ADLS Gen2.
  • Customers appreciate that the SDK preview feedback has been addressed as part of the preview builds and are eagerly awaiting general availability.
  • Customers have successfully executed various tests including creating and appending files using the ADLS Gen2 SDK and testing reads using the Blob REST API. 

Based on your preview feedback, we have also introduced new APIs for bulk upload that simplify the experience for larger data writes/appends for ADLS Gen2. Detailed documentation is available in the links below:

PowerShell and CLI will continue to be available for preview globally in all Azure regions.  We will announce General Availability for PowerShell and CLI as soon as we have addressed preview feedback.

Next steps 

We welcome your feedback to continue to enrich the ADLS Gen2 developer experience and thank everyone for their collaboration towards achieving this high value release. We look forward to these strong partnerships in future investments as well for our developer ecosystem journey.


Hosted App Model


In Windows 10 version 2004, we are introducing the concept of Hosted Apps to the Windows App Model. Hosted apps are registered as independent apps on Windows, but require a host process in order to run. An example would be a script file which requires its host (e.g., PowerShell or Python) to be installed. By itself, it is just a file and does not have any way to appear as an app to Windows. With the Hosted App Model, an app can declare itself as a host, and packages can then declare a dependency upon that host; these are known as hosted apps. When a hosted app is launched, the host executable is launched with the identity of the hosted app package instead of its own identity. This allows the host to access the contents of the hosted app package, and when it calls APIs it does so with the hosted app's identity.

Background

Modern apps are defined to Windows via signed MSIX packages. A package provides identity, so it is known to the system, and it contains all the files, assets, and registration information for the app it contains. Many apps have scenarios where they want to host content and binaries, such as extensibility points, from other apps. There are also scenarios where the host app is more of a runtime engine that loads script content. On top of it all, there is a desire to have these hosted apps look and behave like separate apps on the system – each with its own Start tile, identity, and deep integration with Windows features such as BackgroundTasks, Notifications, and Share. Using the Hosted App Model, a retail kiosk app can easily be rebranded, or a Python or PowerShell script can now be treated as a separate app.

Developers attempt to accomplish this today in one of two ways. First, they simply use a shortcut on the desktop to launch the host. But this experience does not have any deep integration with Windows and the shell, as the ‘app’ is the host executable, not the script. To get a more deeply integrated experience, the alternative is for developers to create a packaged app that includes the host binaries within the package. While the package would now be a separate app with the ability for deep Windows integration, this approach is inefficient: each app would need to redistribute the host, with potential servicing and licensing issues.

The Hosted App Model solves the needs of these hosted apps. The Hosted App Model depends upon two pieces: a “Host” which is made available to other apps, and a “Hosted App” that declares a dependency upon the host. When a hosted app is launched, the host runs under the identity of the hosted app package, so it can load visual assets and content from the Hosted App package location, and when it calls APIs it does so with the identity declared in the Hosted App. The Hosted App gets the intersection of capabilities declared between the Host and Hosted App – this means that a Hosted App cannot ask for more capabilities than the Host provides. In this initial release of the Hosted App Model, packaged desktop apps are supported as hosts, and we will be expanding support to UWP hosts in future releases.

What is a Host and a Hosted App?

More specifically, a Host is the executable in a package declared by the HostRuntime extension, which points to the main executable or runtime process for the hosted app. The HostRuntime extension has an Id attribute, and this identifier is referenced as a dependency by the Hosted App in its package manifest. A host can determine the package identity it is currently running under by referring to the Windows.ApplicationModel.Package.Current API.
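For instance (a trivial sketch, not taken from the samples discussed later), a host might log the identity it is currently running under:

using System;
using Windows.ApplicationModel;

static class IdentityLogger
{
    public static void LogCurrentIdentity()
    {
        // When launched as a hosted app, this reflects the hosted app's package
        // identity and install location rather than the host's own.
        var package = Package.Current;
        Console.WriteLine($"Package full name: {package.Id.FullName}");
        Console.WriteLine($"Install location:  {package.InstalledLocation.Path}");
    }
}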

A Hosted App is an app that declares a package dependency on a Host, and leverages the HostRuntime Id for activation instead of specifying an Entrypoint executable in its own package. It typically contains content, visual assets, scripts, or binaries that may be accessed by the host. Hosted App packages can be Signed or Unsigned:

  • Signed packages may contain executable files. This is useful in scenarios that have an extension mechanism, allowing the host to load a dll or registered component in the hosted app package.
  • Unsigned packages can only contain non-executable files. This is useful in scenarios where the hostruntime only needs to load images, assets and content such as script files. Unsigned packages must include a special Unsigned Publisher OID in their Identity or they won’t be allowed to register. This prevents unsigned packages from spoofing a signed package identity.

Declaring a Host

Declaring a Host is quite simple. All you need to do is to declare the HostRuntime package extension in your AppxManifest.xml. The HostRuntime extension is package-wide and so is declared as a child of the package element. Below is an excerpt from an example AppxManifest.xml showing the HostRuntime entry that declares an app as a Host with Id “PythonHost.”

<Package … xmlns:uap10="http://schemas.microsoft.com/appx/manifest/uap/windows10/10">

  <Identity Name="PyScriptEnginePackage"
            Publisher="CN=AppModelSamples"
            Version="1.0.0.0" />

  <Extensions>
    <uap10:Extension Category="windows.hostRuntime"
                     Executable="PyScriptEngine\PyScriptEngine.exe"
                     uap10:RuntimeBehavior="packagedClassicApp"
                     uap10:TrustLevel="mediumIL">
      <uap10:HostRuntime Id="PythonHost" />
    </uap10:Extension>
  </Extensions>

</Package>

  • hostRuntime – a package-wide extension defining runtime information used when activating a Hosted App.
  • Executable – The executable binary that will be the host process
  • RuntimeBehavior and TrustLevel – A hosted app will run with the definitions expressed in the extension. For example, a hosted app using the Host declared above will run the executable PyScriptEngine.exe, at mediumIL trust level.
  • HostRuntime Id – A unique identifier used to specify a Host in a package. A package can have multiple Host Apps, and each must have a unique HostRuntime Id. This identifier is referenced by the Hosted App.

Declaring a Hosted App

A hosted app must declare a package dependency upon the host, and specify the HostId to use. If the package is unsigned, it must include the Unsigned Publisher OID to ensure the package identity does not conflict with a signed package. Also the TargetDeviceFamily should match the host so it does not attempt to deploy on devices that are not supported by the host. The following is an example of a manifest for a Hosted App that takes a dependency upon the Python host.

<Package … xmlns:uap10="http://schemas.microsoft.com/appx/manifest/uap/windows10/10">

  <Identity Name="NumberGuesser"
            Publisher="CN=AppModelSamples, OID.2.25.311729368913984317654407730594956997722=1"
            Version="1.0.0.0" />

  <Dependencies>
    <TargetDeviceFamily Name="Windows.Desktop"
                        MinVersion="10.0.19041.0" MaxVersionTested="10.0.19041.0" />
    <uap10:HostRuntimeDependency Name="PyScriptEngine"
                                 Publisher="CN=AppModelSamples"
                                 MinVersion="1.0.0.0" />
  </Dependencies>

  <Applications>
    <Application Id="NumberGuesserApp"
                 uap10:HostId="PythonHost"
                 uap10:Parameters="NumberGuesser.py">
    </Application>
  </Applications>

</Package>

  • Unsigned Publisher OID – the OID.2.25.311729368913984317654407730594956997722=1 entry in the Publisher field. This identifier is required when a Hosted App will be unsigned. The identifier ensures any unsigned package cannot spoof the identity of a signed package.
  • HostRuntimeDependency – A Hosted App package must declare a HostRuntimeDependency on the Host app. This consists of the Name and Publisher of the Host package, and the min version it depends on. These can be found under the <Identity> element in the Host package. When deployed, if the HostRuntimeDependency cannot be found, the registration fails.
  • HostId – Instead of declaring the usual Executable and EntryPoint for an app or extension, the HostId attribute expresses a dependency on a Host app. As a result, the Hosted App inherits the Executable, EntryPoint and runtime attributes of the Host with the specified HostId. When registered, if the HostId is not found, the deployment fails.
  • Parameters (optional)– parameters that are passed on the command line to the host app. The host needs to know what to do with these parameters, and so there is an implied contract between the host and hosted app.

Dynamic Registration for Unsigned Hosted Apps

One of the advantages of the new HostRuntime is that it enables a host to dynamically register a hosted app package at runtime. This dynamically registered package does not need to be signed. This allows a host to dynamically generate the content and manifest for the hosted app package and then register it. We are working with the new Microsoft Edge browser to take advantage of the Hosted App Model for Progressive Web Apps (PWAs) – converting the web app manifest into an app manifest, packaging the additional web content into an MSIX package, and registering it. In this model, a PWA is its own independent app registered to the system even though it is being hosted by Edge.

The new APIs for registering a package are:

  • Windows.Management.Deployment.PackageManager.AddPackageByUriAsync() is used for registering an MSIX package
  • Windows.Management.Deployment.PackageManager.RegisterPackageByUriAsync() is used for registering a loose-file AppxManifest.xml (see the sketch below)
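Here is a rough C# sketch of the loose-manifest case, assuming a host that runs with package identity and declares the packageManagement capability (paths, options, and error handling are simplified):

using System;
using System.Threading.Tasks;
using Windows.Management.Deployment;

static class HostedAppRegistration
{
    public static async Task RegisterLooseManifestAsync(string manifestUri)
    {
        var packageManager = new PackageManager();

        // Registers a loose-file AppxManifest.xml, e.g. one generated at runtime.
        var result = await packageManager.RegisterPackageByUriAsync(
            new Uri(manifestUri),
            new RegisterPackageOptions());

        Console.WriteLine(result.IsRegistered
            ? "Hosted app package registered."
            : $"Registration failed: {result.ErrorText}");
    }
}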

In the case where the hosted app is unsigned, its manifest must meet the following requirements:

  1. The unsigned package cannot contain any Executable attributes in its Application or Extension elements (e.g.: no <Application Executable=…> or <Extension Executable=…>), and it can’t specify any other activation data (Executable, TrustLevel, etc). The Application node only supports the HostId and Parameters elements.
  2. An unsigned package must be a Main package type – it cannot be a Bundle, Framework, Resource or Optional package.

In turn, the host process registering an unsigned hosted app package must meet the following requirements:

  1. The process must have package identity
  2. The process must have the package management capability <rescap:Capability Name=”packageManagement”/>

A Host and Hosted App Examples

Let’s have a look at two examples. The first, WinFormsToastHost, is a Host with a signed Hosted App that shows how to include an extension that is dynamically loaded into the host. The second, NumberGuesser, is an example of using Python as a host and a script file as a hosted app package. You can find the sample code for both at https://aka.ms/hostedappsample.

WinFormsToastHost

Host

The host in this example is a simple Windows Forms app that displays its package identity, location, and calls the ToastNotification APIs. It also has the capability to load a binary extension from a hosted app package. When run under its own identity, it does not display the extension information. The app is packaged with the Windows App Packaging Project which includes the manifest declarations for being a host.

WinformsToastHost-Extension

The hosted app is a .NET dll that implements an extension mechanism for the host to load. It also includes a packaging project that declares its identity and dependency upon the hostruntime. You will see this identity reflected in the values displayed when the app is run. When registered, the hostruntime has access to the hosted app’s package location and thus can load the extension.

Running the sample

You can load the source code in Visual Studio as follows:

  1. Open WinformsToastHost.sln in VS2019
  2. Build and deploy WinformsToastHost.Package
  3. Build and deploy HostedAppExtension
  4. Go to the Start menu and launch ‘WinformsToastHost’
  5. Go to the Start menu and launch ‘Hosted WinformsToastHost Extension’

Here is a screenshot of the host running. Notice its package identity and path, and the UX for loading an assembly is not available because it is not running as a hosted app.

Screenshot of the WinForms Toast host running.

Now launch the hosted app. Notice the identity and path have changed, and that the UX for dynamically loading an extension assembly is enabled.

Screenshot of the hosted WinformsToastHost extension app running.

When the “Run hosted” button is pressed, you will get a dialog from the binary extension:

Screenshot of a message from the hosted app.

Here is the Task Manager details view showing both apps running at the same time. Notice that the host binary is the executable for both:

Screenshot of Task Manager .

And when clicking on the Show Toast button for each app, the system recognizes the two different identities in the action center:

Screenshot of Show Toast button.

NumberGuesser – A Python Host and Game

The Host

In this example, the host is comprised of two projects. The first is PyScriptEngine, a wrapper written in C# that makes use of the Python NuGet package to run Python scripts. This wrapper parses the command line and has the capability to dynamically register a manifest as well as launch the Python executable with a path to a script file. The second project is PyScriptEnginePackage, a Windows App Packaging Project that installs PyScriptEngine and registers the manifest that includes the HostRuntime extension.

The Hosted App

The Hosted App is made up of a Python script, NumberGuesser.py, and visual assets. It doesn’t contain any PE files. It has an app manifest with the HostRuntimeDependency and HostId declarations that identify PyScriptEngine as its Host. The manifest also contains the Unsigned Publisher OID entry that is required for an unsigned package.

Running the sample

To run this sample you first need to build and deploy the host, then you can use the host from the commandline to dynamically register the hosted app.

  1. Open PyScriptEngine.sln solution in Visual Studio
  2. Set PyScriptEnginePackage as the Startup project
  3. Build PyScriptEnginePackage
  4. Deploy PyScriptEnginePackage
  5. Because the host app declares an appexecutionalias, you will be able to go to a command prompt and run “pyscriptengine” to get the usage notice:

C:\repos\AppModelSamples\Samples\HostedApps\Python-NumberGuesser>pyscriptengine
PyScriptEngine.exe, a simple host for running Python scripts.
See https://github.com/microsoft/AppModelSamples for source.

Usage:

To register a loose package:

PyScriptEngine.exe -Register <AppXManifest.xml>

To register an MSIX package:

PyScriptEngine.exe -AddPackage <MSIX-file> [-unsigned]

The optional -unsigned parameter is used if the package is unsigned.
In this case, the package cannot include any executable files; only
content files (like .py scripts or images) for the Host to execute.

To run a registered package, run it from the Start Menu.

6. Use the python host to register the NumberGuesser game from the commandline:

C:\repos\AppModelSamples\Samples\HostedApps\Python-NumberGuesser>pyscriptengine -register .\NumberGuesser\AppxManifest.xml -unsigned
PyScriptEngine.exe, a simple host for running Python scripts.
See https://github.com/microsoft/AppModelSamples for source.

Installing manifest file:///C:/repos/AppModelSamples/Samples/HostedApps/Python-NumberGuesser/NumberGuesser/AppxManifest.xml…

Success! The app should now appear in the Start Menu.

7. Now, click on “Number Guesser (Manifest)” in your start menu, and run the game! See how many tries it takes you to guess the number:

Screenshot of Number Guesser (Manifest).

Let’s confirm what is running. Notice how PyScriptEngine is executing under the package identity of NumberGuesser!

Screenshot of PyScriptEngine is executing under the package identity of NumberGuesser.

Wrapping it up

In summary, we are pleased to bring you more power and features on the Windows platform, and we are excited to see what creative ideas you have for the Hosted App Model. In addition to Microsoft Edge, we are working with teams across the company and expect to see more apps leveraging the Hosted App Model in the future.

The post Hosted App Model appeared first on Windows Developer Blog.

Python in Visual Studio Code – March 2020 Release



We are pleased to announce that the
March 2020 release of the Python Extension for Visual Studio Code is now available. You can download the Python extension from the Marketplace, or install it directly from the extension gallery in Visual Studio Code. If you already have the Python extension installed, you can also get the latest update by restarting Visual Studio Code. You can learn more about  Python support in Visual Studio Code in the documentation.  

This release is focused mostly on product quality. We closed a total of 66 issues, 43 of them being bug fixes. But we’re also pleased to include a brand-new Python debugger, debugpy.

If you’re interested, you can check the full list of improvements in our changelog.

New Debugger 

We’re excited to announce that in this release we’re including a new debugger, debugpy. The debugger team has put a lot of effort into making it a faster and even more reliable Python debugger. Along with the debugger comes a new feature: an easier configuration experience for attaching the debugger to local processes.

Attaching to local processes 

Sometimes you may want to attach the debugger to a Python process that is running on your machine, but that can be tricky if, for example, you don’t have control over the application that launched that process.

We’ve made this easy with our new configuration experience for attaching the debugger to local processes.

If you don’t have a launch.json file on your workspace folder, you can simply start a debug session (by pressing F5 or through Run > Start Debugging) and you’ll be presented with a list of debug configuration options. When you select “Attach using Process ID”, it will display a list of processes running locally on your machine:

Configuration options for debugger.

Alternatively, if you already have a launch.json file on your workspace folder, you can add a configuration to it by clicking on the “Add configuration…” option under the drop-down menu in the Run viewlet:

Adding a configuration from the debug viewlet

Then when you select “Python”, you’ll be presented with the same configuration options as above: 

Adding a configuration for attaching to a local process

Selecting the “Attach using Process ID” option from the debug configuration menu adds the below configuration to the existing launch.json file: 

{
    "name": "Python: Attach using Process Id",
    "type": "python",
    "request": "attach",
    "processId": "${command:pickProcess}"
}

When you start a debug session with this configuration selected, a list of processes to which you can attach the debugger will be displayed, and once you pick one, the debugger will attempt to attach to it:

Selecting a process to attach the debugger

You can also filter the processes by ID, file name or interpreter name: 

Filtering the processes view by file name and process ID

Alternatively, if you already know the ID of the process to which you wish to attach the debugger, you can simply add the value directly on the configuration. For example, to attach to a process of ID 1796, you can simply use the below configuration: 

{
    "name": "Python: Attach using Process Id",
    "type": "python",
    "request": "attach",
    "processId": 1796
}

For more information about debugpy, such as how to transition from ptvsd, API changes, CLI references, allowed debug configurations, and more, check out debugpy’s wiki page.

Other Changes and Enhancements 

We have also added small enhancements and fixed issues requested by users that should improve your experience working with Python in Visual Studio Code. Some notable changes include: 

  • Remove extra lines at the end of the file when formatting with Black. (#1877) 
  • Support scrolling beyond the last line in the notebook editor and the interactive window. (#7892) 
  • Added a command to allow users to select a kernel for a Notebook. (#9228) 
  • Show quickfixes for launch.json. (#10245) 
  • Update Jedi to 0.16.0. (#9765) 

We’re constantly A/B testing new features. If you see something different that was not announced by the team, you may be part of the experiment! To see if you are part of an experiment, you can check the first lines in the Python extension output channel. If you wish to opt out of A/B testing, you can open the user settings.json file (View > Command Palette… and run Preferences: Open Settings (JSON)) and set the “python.experiments.enabled” setting to false.

Be sure to download the Python extension for Visual Studio Code now to try out the above improvements. If you run into any problems, please file an issue on the Python VS Code GitHub page. 

 

The post Python in Visual Studio Code – March 2020 Release appeared first on Python.


OData Connected Service version 0.6.0 Release


OData Connected Service 0.6.0 has been released and is available in the Visual Studio Marketplace.
The new version adds the following features:

  1. Custom Http headers. 
  2. Generation of multiple files.
  3. Writing metadata to a file.

Custom Http Headers 

This feature allows you to add headers that will be sent with the request that fetches the metadata used in generating proxy files. This is particularly important when the metadata is protected: to access the metadata endpoint, you may be required to pass authorization headers as shown below. These values are not stored by the OData Connected Service or on your computer, and you are required to provide them any time you update the OData Connected Service.

HeaderKey1: HeaderValue1
HeaderKey2: HeaderValue2
HeaderKey3: HeaderValue3

Ensure that each header is on a separate line. 

To use this feature, follow these simple steps:

  1. Right click on the project you are working on from the solution explorer.
  2. Select Add->Connected Service from the context menu.
  3. From the Connected Service Window that opens, select the Microsoft OData Connected Service.
  4. On the wizard window, you will see a provision to input your Custom Headers. Input your headers using the above structure. 

Image configodataendpoint

Generation of Multiple Files

This feature allows OData Connected Service to generate multiple files. Currently, OData Connected Service generates only one file. The size of this file varies depending on the size of your service’s metadata: if the metadata is large, the generated file will be very large and may affect the performance of your application. This feature works by generating a file for each entity type and enum type in your service’s metadata file.

If you have the OData Connected Service extension installed,

  1. Right click on the project you are working on from the solution explorer.
  2. Select Add->Connected Service from the context menu.
  3. From the Connected Service Window that opens, select the Microsoft OData Connected Service.
  4. On the wizard window, configure your service endpoint by providing the service name and the OData URL endpoint then click Next.
  5. On the Next page, click on the “AdvancedSettings” link.

Check the checkbox beside the “Generate multiple files” configuration on this page, then click Finish.

Image advancedsettings

The generated files appear like below: 

Image multiplefiles

Writing metadata to a file

Currently, OData Connected Service writes an endpoint’s metadata to a string in the generated file. This makes the generated file unnecessarily large and difficult to handle in cases where the metadata is large. With this feature, an endpoint’s metadata will be written to a file and loaded when the DataServiceContext is initialized. This file is located within the connected service project folder and is called Csdl.xml.

There are more features and fixes coming to OData Connected Service soon, so stay tuned for upcoming releases.

The post OData Connected Service version 0.6.0 Release appeared first on OData.


Update on Stable channel releases for Microsoft Edge


In light of current global circumstances, the Microsoft Edge team is pausing updates to the Stable channel for Microsoft Edge. This means that Microsoft Edge 81 will not be promoted to Stable until we resume these updates.

We are making this change to be consistent with the Chromium project, which recently announced a similar pause due to adjusted schedules, and out of a desire to minimize additional impact to web developers and organizations that are similarly impacted.

We will continue to deliver security and stability updates to Microsoft Edge 80. Preview channels (Canary, Dev, and Beta) will continue to update on their usual schedule.

As the situation evolves, we will post updates here and on our Twitter channel.

The post Update on Stable channel releases for Microsoft Edge appeared first on Microsoft Edge Blog.

Our commitment to customers and Microsoft cloud services continuity


Over the past several weeks, all of us have come together to battle the global health pandemic. During this time, organizations around the world are adjusting the way they manage their daily work and how their workforce continues in the face of extraordinary changes to their professional and personal lives.

With this blog we wanted to share a bit about what we have learned over the last few weeks, resources to help organizations manage through these times, support for critical first responders and emergency organizations, and the criteria we have put in place to manage cloud services capacity to support critical operations. 

We will continue to communicate regularly and openly, so you can have insight into what we are seeing, learning and doing.

As companies operationalize to address new and unique challenges, we have mobilized our global response plan to help customers stay up and running during this critical time. We are actively monitoring performance and usage trends 24/7 to ensure we are optimizing our services for customers worldwide, while accommodating new demand. We are working closely with first responder organizations and critical government agencies to ensure we are prioritizing their unique needs and providing them our fullest support. We are also partnering with governments around the globe to ensure our local datacenters have on-site staffing and all functions are running properly.

In response to health authorities emphasizing the importance of social distancing, we are supporting many large-scale corporations, schools, and governments in the mobilization of remote workforces. Microsoft Teams is helping millions of people adapt to remote work. Organizations have been using Dynamics 365 Customer Service to help contact center employees provide consistent, personalized support while working remotely. Ensuring government and organizational functions can continue while keeping safe distances is critical to our society today.

As demand continues to grow, if we are faced with any capacity constraints in any region during this time, we have established clear criteria for the priority of new cloud capacity. Top priority will be going to first responders, health and emergency management services, critical government infrastructure organizational use, and ensuring remote workers stay up and running with the core functionality of Teams. We will also consider adjusting free offers, as necessary, to ensure support of existing customers. 

We will continue to communicate with customers proactively and transparently about our cloud policies through the Microsoft Trust Center and we are committed to supporting every customer through this difficult period. 

These are certainly unprecedented and challenging times. It is not business as usual. But, together, we can and will get through this. We will be back in touch soon. In the meantime, if you have any immediate questions or needs, please refer to the following resources.


How Azure Machine Learning service powers suggested replies in Outlook


Microsoft 365 applications are so commonplace that it’s easy to overlook some of the amazing capabilities that are enabled with breakthrough technologies, including artificial intelligence (AI). Microsoft Outlook is an email client that helps you work efficiently with email, calendar, contacts, tasks, and more in a single place.

To help users be more productive and deliberate in their actions while emailing, the web version of Outlook and the Outlook for iOS and Android app have introduced suggested replies, a new feature powered by Azure Machine Learning service. Now when you receive an email message that can be answered with a quick response, Outlook on the web and Outlook mobile suggest three response options that you can use to reply with only a couple of clicks or taps, helping people communicate in both their work and personal lives by reducing the time and effort involved in replying to an email.


The developer team behind suggested replies is comprised of data scientists, designers, and machine learning engineers with diverse backgrounds who are working to improve the lives of Microsoft Outlook users by expediting and simplifying communications. They are at the forefront of applying cutting-edge natural language processing (NLP) and machine learning (ML) technologies and leverage these technologies to understand how users communicate through email and improve those interactions from a productivity standpoint to create a better experience for users.

A peek under the hood

To process the massive amount of raw data that these interactions provide, the team uses Azure Machine Learning pipelines to build their training models. Azure Machine Learning pipelines allow the team to divide training into discrete steps such as data cleanup, transformation, feature extraction, training, and evaluation. The output of the Azure Machine Learning pipeline converts raw data into a model. This pipeline also allows the data scientists to build training in a compliant manner that enforces privacy and compliance checks.
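As a rough illustration of that idea (the step names, scripts, and compute targets below are hypothetical, not the Outlook team's actual pipeline), an Azure Machine Learning pipeline with discrete steps can be assembled like this:

from azureml.core import Experiment, Workspace
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()  # reads workspace details from config.json

# Each stage of training is a discrete, reusable pipeline step.
prep_step = PythonScriptStep(
    name="clean-and-transform",
    script_name="prep.py",
    compute_target="cpu-cluster",
    source_directory="./steps",
)

train_step = PythonScriptStep(
    name="train-model",
    script_name="train.py",
    compute_target="gpu-cluster",
    source_directory="./steps",
)
train_step.run_after(prep_step)  # train only after data prep completes

pipeline = Pipeline(workspace=ws, steps=[prep_step, train_step])
run = Experiment(ws, "suggested-replies-sketch").submit(pipeline)
run.wait_for_completion(show_output=True)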


In order to train this model, the team needed a way to build and prepare a large data set comprised of over 100 million messages. To do this, the team leveraged a distributed processing framework to sample and retrieve data from a broad user base.

Azure Data Lake Storage is used to store the training data used for training the suggested replies models. We then clean and curate the data into message reply pairs (including potential responses to an email) that are stored in Azure Data Lake Storage (ADLS). The training pipelines also consume the reply pairs stored in ADLS in order to train models. To conduct the Machine Learning training itself, the team uses GPU pools available in Azure. The training pipelines leverage these curated Message Reply pairs to learn how to suggest appropriate replies based on a given message. Once the model is created, data scientists can compare the model performance with previous models and evaluate which approaches perform better at recommending relevant suggested replies.

The Outlook team helps protect your data by using the Azure platform to prepare large-scale data sets that are required to build a feature like suggested replies in accordance with Office 365 compliance standards. The data scientists use Azure compute and workflow solutions that enforce privacy policies to create experiments and train multiple models on GPUs. This helps with the overall developer experience and provides agility in the inner development loop cycle.

This is just one of many examples of how Microsoft products are powered by the breakthrough capabilities of Azure AI to create better user experiences. The team is learning from feedback every day and improving the feature for users while also expanding the types of suggested replies offered. Keep following the Azure blog to stay up-to-date with the team and be among the first to know when this feature is released.

Learn more

Learn more about the Azure Machine Learning service.

Get started with a free trial of Azure Machine Learning service.

Bing adopts schema.org markup for special announcements for COVID-19

Bing is adding new features to help keep everyone up to date on the latest special announcements related to the COVID-19 pandemic. In addition to our previously announced experiences for finding tallies of cases in different geographic regions, we will add announcements of special hours and closures for local businesses, information on risk assessment and testing centers, and travel restrictions and guidelines.

 

SpecialAnnouncement schema markup for government health agencies

Bing may consume case statistics from government health agencies at the country, state or province, administrative area, and city level that use the schema.org markup for diseaseSpreadStatistics associated with a SpecialAnnouncement. These statistics are used on bing.com/covid and other searches for COVID-19 statistics. As a government agency determining whether to use this tag for your webpages, consider whether it meets the following criteria, which are characteristics we consider when selecting case statistics to include:
 
  • Your site must be the official government site reporting case statistics for your region.
  • Information in the markup must be up-to-date and consistent with statistics displayed to the general public from your site.
  • Your special announcement must include the date it was posted, indicating the time at which the statistics were first reported.

SpecialAnnouncement schema markup for COVID-19 related business updates

Bing may consume special announcements from local businesses, hospitals, schools, government offices, and more that use the schema.org markup for SpecialAnnouncement. A label showing your special announcements related to the COVID-19 pandemic, with a link to your site for more details, may be used on web results for your official website and in local listings shown on the SERP or map experiences. This provides an easy link for your customers and community to find your latest information. When determining whether to use this tag for your webpages, consider whether it meets the following criteria, which are characteristics we consider when selecting special announcements to display (a minimal markup sketch follows the list):
 
  • The special announcements must be posted on your official website and refer only to changes related to COVID-19 for your own business, hospital, school, or government office.
  • The name of the special announcement must be easily identified within the body of the special announcement page on your site.
  • Your special announcement must include the date it was posted and should also include the time the announcement expires, if appropriate.
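For reference, a minimal JSON-LD sketch of a business announcement looks like the following; the names, dates, and URLs are placeholders.

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "SpecialAnnouncement",
  "name": "Contoso Cafe temporary hours",
  "text": "Open for takeout only, 11am to 7pm, until further notice.",
  "datePosted": "2020-03-20",
  "expires": "2020-04-30",
  "category": "https://www.wikidata.org/wiki/Q81068910",
  "announcementLocation": {
    "@type": "LocalBusiness",
    "name": "Contoso Cafe",
    "url": "https://www.contoso.com"
  }
}
</script>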

SpecialAnnouncement schema markup for risk assessment and testing centers

Bing may consume information on risk assessments and testing centers from healthcare providers and government health agencies that use the schema.org markup for gettingTestedInfo and CovidTestingFacility. Searches for nearby testing information may include information on how to get assessed to see whether getting tested is recommended and, if so, how to locate a nearby testing facility and find instructions for getting tested at that center. When determining whether to use this tag for your webpages, consider whether it meets the following criteria, which are characteristics we consider when selecting testing information to display:
 
  • Your site must be an official site for a well-known healthcare facility or government health agency.
  • gettingTestedInfo must refer to a webpage that specifies what assessment is required prior to being tested at the given testing location.
  • The testing facility information must refer to URLs and facility locations already associated with your provider or agency. Listing other providers’ facilities is not supported at this time.

SpecialAnnouncement schema markup for travel restrictions

Bing may consume information on travel restrictions from government agencies, travel agencies, airlines, hotels, and other travel providers that use the schema.org markup for travelBans and publicTransportClosuresInfo. Travel related searches may include information on updated hours, closures, and guidelines for travel. When determining whether to use this tag for your webpages, consider whether it meets the following criteria, which are characteristics we consider when selecting travel restrictions to display:
 
  • Your site must be an official site for a well-known government agency, travel agency, airline, hotel, or other travel provider.
  • The special announcement including the travel ban or public transport closure info must specify the location covered by the announcement.
  • The name of the special announcement must be easily identified within the body of the special announcement page describing the ban or closure info on your site.
  • Your special announcement must include the date it was posted and should also include the time the announcement expires, if appropriate.
More information on how to implement and use these tags can be found at https://schema.org/SpecialAnnouncement and Bing Webmaster special announcement specifications.

 

Visual Studio 2019 for Mac version 8.5 is now available


Are you ready for the latest version of Visual Studio 2019 for Mac? If so, version 8.5 is available for you to download today! With this release, we’ve continued to polish the existing experience, paying close attention to problem areas mentioned by our users. You’ll also find authentication templates available for ASP.NET Core projects and support for the latest Azure Functions version 3.0.

 

Use authentication in your ASP.NET Core apps

Many of our ASP.NET Core developers have requested that we bring the ability to easily create ASP.NET Core apps with authentication to Visual Studio for Mac. Now, when you create a new ASP.NET Core project that supports No Authentication or Individual Authentication using an in-app store, you’ll encounter an additional screen in the new project creation wizard. Please give this new feature a try and let us know any feedback you have.

New project wizard showing creation of an ASP.NET Core project with authentication options.

You spoke, we listened

We’ve been working hard to address issues our users encounter in Visual Studio for Mac in their average day. As part of our efforts in improving the overall experience, we’ve released a handful of new changes that address some of the concerns we’ve heard. I’m personally excited about fixes in the Unit Test explorer that resulted in a closer resemblance to Windows with regards to unit test nesting and improvements in the debugger that include the ability to edit function breakpoints and better stepping performance. We’ve also fixed an issue that showed duplicate entries for launchSettings.json and appsettings.json in the solution explorer.

Stay up to date

In addition to updating the NuGet distribution in Visual Studio for Mac with version 5.4, the latest version also found in Visual Studio, we’ve also fixed issues where NuGet packages would fail to update and made the NuGet Package Manager much more accessible with more logical focus order and improved VoiceOver and keyboard navigation.

Visual Studio 2019 for Mac version 8.5 also brings official support for Azure Functions 3.0, allowing you to build and deploy functions with the 3.0 runtime. You’ll find templates to help you get started with serverless computing under the Cloud > General section in the create new project dialog. You can follow the tutorial here to get started with your first Azure Function in Visual Studio for Mac.

New project wizard showing creation of an Azure Functions project.

Update to the latest today!

We hope you enjoy Visual Studio 2019 for Mac 8.5 as much as we enjoyed working on it. To update to this version, you can download the installer from the Visual Studio for Mac website or use the in-product updater to update an existing installation.

If you have any feedback on this, or any version of Visual Studio for Mac, please leave it in the comments below this post or reach out to us on Twitter at @VisualStudioMac. If you run into any issues while using Visual Studio for Mac, you can use Report a Problem to notify the team. In addition to reports on issues in the product, we’d also appreciate hearing from you on what’s important to you via feature suggestions on the Visual Studio Developer Community website.

 

 

The post Visual Studio 2019 for Mac version 8.5 is now available appeared first on Visual Studio Blog.

Easily adding Security Headers to your ASP.NET Core web app and getting an A grade


Well that sucks.

Score of F on SecurityHeaders.com

That's my podcast website with an F rating from SecurityHeaders.com. What's the deal? I took care of this months ago!

Turns out, recently I moved from Windows to Linux on Azure.

If I am using IIS on Windows, I can (and did) make a section in my web.config that looks something like this.

Do note that I've added a few custom things and you'll want to make sure you DON'T just copy paste this. Make yours, yours.

Note that I've whitelisted a bunch of domains to make sure my site works. Also note that I have a number of "unsafe-inlines" that are not ideal.

<configuration>

<system.webServer>
<httpProtocol>
<customHeaders>
<add name="Strict-Transport-Security" value="max-age=31536000"/>
<add name="X-Content-Type-Options" value="nosniff"/>
<add name="X-Xss-Protection" value="1; mode=block"/>
<add name="X-Frame-Options" value="SAMEORIGIN"/>
<add name="Content-Security-Policy" value="default-src https:; img-src * 'self' data: https:; style-src 'self' 'unsafe-inline' www.google.com platform.twitter.com cdn.syndication.twimg.com fonts.googleapis.com; script-src 'self' 'unsafe-inline' 'unsafe-eval' www.google.com cse.google.com cdn.syndication.twimg.com platform.twitter.com platform.instagram.com www.instagram.com cdn1.developermedia.com cdn2.developermedia.com apis.google.com www.googletagservices.com adservice.google.com securepubads.g.doubleclick.net ajax.aspnetcdn.com ssl.google-analytics.com az416426.vo.msecnd.net/;"/>
<add name="Referrer-Policy" value="no-referrer-when-downgrade"/>
<add name="Feature-Policy" value="geolocation 'none';midi 'none';notifications 'none';push 'none';sync-xhr 'none';microphone 'none';camera 'none';magnetometer 'none';gyroscope 'none';speaker 'self';vibrate 'none';fullscreen 'self';payment 'none';"/>
<remove name="X-Powered-By" />
<remove name="X-AspNet-Version" />
<remove name="Server" />
</customHeaders>
</httpProtocol>
...

But, if I'm NOT using IIS - meaning I'm running my ASP.NET app in a container or on Linux - this will be ignored. Since I recently moved to Linux, I assumed (my bad for no tests here) that it would just work.

My site is hosted on Azure App Service for Linux, so I want these headers to be output the same way. There are several great choices in the form of Open Source NuGet libraries to help. If I use the ASP.NET Core middleware pipeline then these headers will be output and work the SAME on both Windows AND Linux.

I'll be using the NWebsec Security Libraries for ASP.NET Core. They offer a simple fluent way to add the headers I want.

TO BE CLEAR: Yes, I (or you) can add these headers manually with AddHeader, but these simple libraries ensure that our commas and semicolons are correct. They also offer strongly typed middleware that is fast and easy to use.

Taking the same web.config above and translating it to Startup.cs's Configure Pipeline with NWebSec looks like this:

app.UseHsts(options => options.MaxAge(days: 30));

app.UseXContentTypeOptions();
app.UseXXssProtection(options => options.EnabledWithBlockMode());
app.UseXfo(options => options.SameOrigin());
app.UseReferrerPolicy(opts => opts.NoReferrerWhenDowngrade());

app.UseCsp(options => options
    .DefaultSources(s => s.Self()
        .CustomSources("data:")
        .CustomSources("https:"))
    .StyleSources(s => s.Self()
        .CustomSources("www.google.com", "platform.twitter.com", "cdn.syndication.twimg.com", "fonts.googleapis.com")
        .UnsafeInline()
    )
    .ScriptSources(s => s.Self()
        .CustomSources("www.google.com", "cse.google.com", "cdn.syndication.twimg.com", "platform.twitter.com" ... )
        .UnsafeInline()
        .UnsafeEval()
    )
);

There is one experimental HTTP header that NWebSec doesn't support (yet) called Feature-Policy. It's a way for your website to declare on the server side, for example, "my site doesn't allow use of the webcam." That would prevent a bad guy from injecting a local script that uses the webcam, or some other client-side feature.

I'll add it manually, both to make the point that I can and to show that you aren't limited by your security library of choice.

NOTE: Another great security library is Andrew Lock's NetEscapades that includes Feature-Policy as well as some other great features.

Here's my single Middleware that just adds the Feature-Policy header to all responses.

//Feature-Policy
app.Use(async (context, next) =>
{
    context.Response.Headers.Add("Feature-Policy", "geolocation 'none';midi 'none';notifications 'none';push 'none';sync-xhr 'none';microphone 'none';camera 'none';magnetometer 'none';gyroscope 'none';speaker 'self';vibrate 'none';fullscreen 'self';payment 'none';");
    await next.Invoke();
});

Now I'll commit, build, and deploy (all automatic for me using Azure DevOps) and scan the site again:

Score of A on SecurityHeaders.com

That was pretty straightforward and took less than an hour. Your mileage may vary but that's the general idea!


Sponsor: Protect your apps from reverse engineering and tampering with PreEmptive, makers of Dotfuscator. Dotfuscator has been in-the-box with Microsoft Visual Studio since 2003. Mention HANSELMAN for savings on a professional license!

© 2020 Scott Hanselman. All rights reserved.

Microsoft powers transformation at NVIDIA’s GTC Digital Conference

The world of supercomputing is evolving. Work once limited to high-performance computing (HPC) on-premises clusters and traditional HPC scenarios, is now being performed at the edge, on-premises, in the cloud, and everywhere in between. Whether it’s a manufacturer running advanced simulations, an energy company optimizing drilling through real-time well monitoring, an architecture firm providing professional virtual graphics workstations to employees who need to work remotely, or a financial services company using AI to navigate market risk, Microsoft’s collaboration with NVIDIA makes access to NVIDIA graphics processing units (GPU) platforms easier than ever.

These modern needs require advanced solutions that were traditionally limited to a few organizations because they were hard to scale and took a long time to deliver. Today, Microsoft Azure delivers HPC capabilities, a comprehensive AI platform, and the Azure Stack family of hybrid and edge offerings that directly address these challenges.

This year during GTC Digital, we’re spotlighting some of the most transformational applications powered by NVIDIA GPU acceleration that highlight our commitment to edge, on-prem, and cloud computing. Registration is free, so sign up to learn how Microsoft is powering transformation.

Visualization and GPU workstations

Azure enables a wide range of visualization workloads, which are critical for desktop virtualization as well as professional graphics such as computer-aided design, content creation, and interactive rendering. Visualization workloads on Azure are powered by NVIDIA’s world-class GPUs and Quadro technology, the world’s preeminent visual computing platform. With access to graphics workstations on Azure cloud, artists, designers, and technical professionals can work remotely, from anywhere, and from any connected device. See our NV-Series virtual machines (VMs) for Windows and Linux.

Artificial intelligence

We're sharing the release of the updated execution provider in ONNX Runtime with integration for NVIDIA TensorRT 7. With this update, ONNX Runtime can execute Open Neural Network Exchange (ONNX) models on NVIDIA GPUs in the Azure cloud and at the edge using Azure Stack Edge, taking advantage of new TensorRT 7 features like dynamic shapes, mixed-precision optimizations, and INT8 execution.

Dynamic shape support enables variable batch sizes, which ONNX Runtime uses when processing recurrent neural network (RNN) and Bidirectional Encoder Representations from Transformers (BERT) models. Mixed precision and INT8 execution speed up work on the GPU, which lets ONNX Runtime better balance performance across CPU and GPU. Originally released in March 2019, TensorRT with ONNX Runtime delivers better inferencing performance on the same hardware compared to generic GPU acceleration.
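
To make this concrete, here is a minimal C# sketch of pointing ONNX Runtime at the TensorRT execution provider. This is my illustration rather than code from the announcement: the model path, the input name, and the tensor shape are hypothetical, and the exact provider-registration helper can vary by ONNX Runtime GPU/TensorRT package version.

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

class TensorRtSketch
{
    static void Main()
    {
        // Prefer the TensorRT execution provider on GPU 0; operators it doesn't
        // support fall back to the default CUDA/CPU providers.
        using var options = SessionOptions.MakeSessionOptionWithTensorrtProvider(0);
        using var session = new InferenceSession("model.onnx", options); // hypothetical model path

        // Dummy NCHW input; the name "input" and the shape depend on your model.
        var tensor = new DenseTensor<float>(new float[1 * 3 * 224 * 224], new[] { 1, 3, 224, 224 });
        var inputs = new List<NamedOnnxValue> { NamedOnnxValue.CreateFromTensor("input", tensor) };

        // Run inference and inspect the first output.
        using var results = session.Run(inputs);
        Console.WriteLine($"Output length: {results.First().AsTensor<float>().Length}");
    }
}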

Additionally, the Azure Machine Learning service now supports RAPIDS, a suite of open-source libraries for GPU-accelerated data science built on the NVIDIA CUDA platform. Azure developers can use RAPIDS in the same way they currently use other machine learning frameworks, and in conjunction with Pandas, Scikit-learn, PyTorch, and TensorFlow. These two developments represent major milestones toward a truly open and interoperable ecosystem for AI. We're working to ensure these platform additions simplify and enrich those developer experiences.

Edge

Microsoft provides various solutions in the Intelligent Edge portfolio to empower customers to make sure that machine learning not only happens in the cloud but also at the edge. The solutions include Azure Stack Hub, Azure Stack Edge, and IoT Edge.

Whether you are capturing sensor data and inferencing at the edge, or performing end-to-end processing with model training in Azure and leveraging the trained models at the edge for enhanced inferencing operations, Microsoft can support your needs however and wherever you need to.

Supercomputing scale

Time-to-decision is incredibly important with a global economy that is constantly on the move. With the accelerated pace of change, companies are looking for new ways to gather vast amounts of data, train models, and perform real-time inferencing in the cloud and at the edge. The Azure HPC portfolio consists of purpose-built computing, networking, storage, and application services to help you seamlessly connect your data and processing needs with infrastructure options optimized for various workload characteristics.

Azure Stack Hub announced preview

Microsoft, in collaboration with NVIDIA, is announcing that Azure Stack Hub with Azure NC-Series Virtual Machine (VM) support is now in preview. Azure NC-Series VMs are GPU-enabled Azure Virtual Machines available on the edge. GPU support in Azure Stack Hub unlocks a variety of new solution opportunities. With our Azure Stack Hub hardware partners, customers can choose the appropriate GPU for their workloads to enable Artificial Intelligence, training, inference, and visualization scenarios.

Azure Stack Hub brings together the full capabilities of the cloud to effectively deploy and manage workloads that otherwise are not possible to bring into a single solution. We are offering two NVIDIA GPU models during the preview period: the NVIDIA V100 Tensor Core GPU and the NVIDIA T4 Tensor Core GPU. These physical GPUs align with the following Azure N-Series VM types:

  • NCv3 (NVIDIA V100 Tensor Core GPU): These enable learning, inference and visualization scenarios. See Standard_NC6s_v3 for a similar configuration.
  • TBD (NVIDIA T4 Tensor Core GPU): This new VM size (available only on Azure Stack Hub) enables light learning, inference, and visualization scenarios.

Hewlett Packard Enterprise is supporting the Microsoft GPU preview program as part of its HPE ProLiant for Microsoft Azure Stack Hub solution. “The HPE ProLiant for Microsoft Azure Stack Hub solution with the HPE ProLiant DL380 server nodes are GPU-enabled to support the maximum CPU, RAM, and all-flash storage configurations for GPU workloads,” said Mark Evans, WW product manager, HPE ProLiant for Microsoft Azure Stack Hub, at HPE. “We look forward to this collaboration that will help customers explore new workload options enabled by GPU capabilities.”

As the leading cloud infrastructure provider1, Dell Technologies helps organizations remove cloud complexity and extend a consistent operating model across clouds. Working closely with Microsoft, the Dell EMC Integrated System for Azure Stack Hub will support additional GPU configurations, which include NVIDIA V100 Tensor Core GPUs, in a 2U form factor. This will provide customers increased performance density and workload flexibility for the growing predictive analytics and AI/ML markets. These new configurations also come with automated lifecycle management capabilities and exceptional support.

To participate in the Azure Stack Hub GPU preview, please send us an email today. 

Azure Stack Edge preview

We also announced the expansion of our Microsoft Azure Stack Edge preview with the NVIDIA T4 Tensor Core GPU. Azure Stack Edge is a cloud-managed appliance that provides local processing for fast analysis and insights on your data. With the addition of an NVIDIA GPU, you're able to build in the cloud and then run at the edge. For more information about this exciting release, please see the detailed blog.

GTC Digital

Microsoft session recordings will be available on the GTC Digital site starting March 26. You can find a list of the Microsoft digital sessions along with corresponding links in the Microsoft Tech Community blog here.


1 IDC WW Quarterly Cloud IT Infrastructure Tracker, Q3 2019, January 2020, Vendor Revenue

Microsoft is expanding the Azure Stack Edge with NVIDIA GPU preview

We’re expanding the Microsoft Azure Stack Edge with NVIDIA T4 Tensor Core GPU preview during the GPU Technology Conference (GTC Digital). Azure Stack Edge is a cloud-managed appliance that brings Azure’s compute, storage, and machine learning capabilities to the edge for fast local analysis and insights. With the included NVIDIA GPU, you can bring hardware acceleration to a diverse set of machine learning (ML) workloads.

What’s new with Azure Stack Edge

At Mobile World Congress in November 2019, we announced a preview of the NVIDIA GPU version of Azure Stack Edge and we’ve seen incredible interest in the months that followed. Customers in industries including retail, manufacturing, and public safety are using Azure Stack Edge to bring Azure capabilities into the physical world and unlock scenarios such as the real-time processing of video powered by Azure Machine Learning.

These past few months, we’ve taken our customers' feedback to make key improvements and are excited to make our preview available to even more customers today.

If you're not already familiar with Azure Stack Edge, here are a few of the benefits:

  • Azure Machine Learning: Build and train your model in the cloud, then deploy it to the edge for FPGA or GPU-accelerated inferencing.
  • Edge Compute: Run IoT, AI, and business applications in containers at your location. Use these to interact with your local systems, or to pre-process your data before it transfers to Azure.
  • Cloud Storage Gateway: Automatically transfer data between the local appliance and your Azure Storage account.  Azure Stack Edge caches the hottest data locally and speaks file and object protocols to your on-prem applications.
  • Azure-managed appliance: Easily order and manage Azure Stack Edge from the Azure Portal.  No initial capex fees; pay as you go, just like any other Azure service.

Enabling our partners to bring you world-class business applications

Equally important to bringing you a great device is enabling our partners to bring you innovative applications to meet your business needs. We'd love to share some of the continued investment we're making with partners to bring their exciting developments to you.

As self-checkouts grow in prevalence, Malong Technologies is innovating in AI applications for loss prevention.

“For our customers in the retail industry, artificial intelligence innovation is happening at the edge,” said Matt Scott, co-founder and chief executive officer, Malong Technologies. “Along with our state-of-the-art solutions, our customers need hardware that is powerful, reliable, and custom-tailored for the cloud. Microsoft’s Azure Stack Edge fits the bill perfectly. We’re proud to be a Microsoft Gold Certified Partner, working with Microsoft to help our retail customers succeed.”

Increasing your manufacturing organization’s quality inspection accuracy is key to Mariner’s Spyglass Visual Inspection application.

“Mariner has standardized on Microsoft's Azure Stack Edge for our Spyglass Visual Inspection and Spyglass Connected Factory products. These solutions are mission critical to our manufacturing customers. Azure Stack Edge provides the performance, stability and availability they require.” – Phil Morris, CEO, Mariner

Building computer vision solutions to improve performance and safety in manufacturing and other industries is a key area of innovation for XXII.

“XXII is thrilled to be a Microsoft partner and we are working together to provide our clients with real-time video analysis software on the edge with the Azure Stack Edge box. With this solution, Azure allows us to harvest the full potential of NVIDIA GPUs directly on the edge and provide our clients in retail, industry, and smart city with smart video analysis that is easily deployable, scalable, and manageable with Azure Stack Edge.” – Souheil Hanoune, Chief Scientific Officer, XXII

More to come with Azure Stack Edge

There are even more exciting developments with Azure Stack Edge coming. We’re putting the final touches on much-awaited new compute and AI capabilities including virtual machines, Kubernetes clusters, and multi-node support. Along with these new features announced at Ignite 2019, Data Box Edge was renamed Azure Stack Edge to align with the Azure Stack portfolio.

Our Rugged series for sites with harsh or remote environments is also coming this year, including the battery-powered form-factor that can be carried in a backpack. The versatility of these Azure Stack Edge form-factors and cloud-managed capabilities brings cloud intelligence and compute to retail stores, factory floors, hospitals, field operations, disaster zones, and rescue operations.

Get started with the Azure Stack Edge with NVIDIA GPU preview

Thank you for continuing to partner with us as we bring new capabilities to Azure Stack Edge. We’re looking forward to hearing from you.

  • To get started with the preview, please email us and we’ll follow up to learn more about your scenarios.
  • Learn more about Azure Stack Edge.

Learn more about Azure’s Hybrid Strategy

Read about more updates from Azure during NVIDIA’s GTC.

Catch up on the latest .NET Productivity features

The Roslyn team continuously works to provide tooling that deeply understands the code you are writing in order to help you be more productive. In this post, I'll cover some of the latest .NET Productivity features available in Visual Studio 2019.

Tooling improvements

The feature that I'm most excited about is the new Go To Base command. Go To Base allows you to easily navigate up the inheritance chain. The command is available on the context (right-click) menu of the element whose inheritance hierarchy you want to navigate, or you can press Alt+Home.

Go To Base

Find All References now categorizes the results by type and member. You can group by type and member in the Find All References window.

Find All References

Code fixes and refactorings

Code fixes and refactorings are the code suggestions the compiler provides through the light bulb and screwdriver icons. To trigger the Quick Actions and Refactorings menu, press (Ctrl+.) or (Alt+Enter). Below are the code fixes and refactorings that are new in Visual Studio 2019:

The extract local function refactoring allows you to turn a fragment of code from an existing method into a local function. Highlight the code that you want extracted. Press Ctrl+. to trigger the Quick Actions and Refactorings menu and select Extract local function.

Extract Local Function
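
For illustration, here is roughly what that can look like on a made-up method (the method and names below are mine, not from the post):

using System;

class ExtractLocalFunctionExample
{
    // After highlighting the null check and choosing "Extract local function",
    // the selected fragment becomes a local function inside the original method.
    static string GetFirstWord(string input)
    {
        ValidateInput();
        return input.Split(' ')[0];

        void ValidateInput()
        {
            if (input is null)
            {
                throw new ArgumentNullException(nameof(input));
            }
        }
    }

    static void Main() => Console.WriteLine(GetFirstWord("hello world")); // prints "hello"
}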

The make members static code fix helps improve readability by making a non-static member static. Place your cursor on the member name. Press Ctrl+. to trigger the Quick Actions and Refactorings menu and select Make static.

Make Member Static
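
A hypothetical before/after for this code fix (the type and member are illustrative, not from the post):

using System;

class Geometry
{
    // Before: an instance member that never touches instance state.
    //   public double Square(double x) => x * x;

    // After "Make static": callers no longer need a Geometry instance.
    public static double Square(double x) => x * x;
}

class Program
{
    static void Main() => Console.WriteLine(Geometry.Square(3)); // 9
}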

The simplify string interpolation refactoring simplifies string interpolations to be more legible and concise. Place your cursor on the string interpolation. Press Ctrl+. to trigger the Quick Actions and Refactorings menu and select Simplify interpolation.

Simplify String Interpolation
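
As a rough example of the kind of rewrite this produces (my sample, not from the post):

using System;

class Program
{
    static void Main()
    {
        decimal price = 42.5m;

        // Before: redundant ToString() and PadLeft() calls inside the interpolation hole.
        Console.WriteLine($"Total: {price.ToString().PadLeft(10)}");

        // After "Simplify interpolation": an alignment component does the padding.
        Console.WriteLine($"Total: {price,10}");
    }
}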

The convert if statements to switch statements or switch expressions refactoring enables an easy transition between if statements and switch statements or expressions. Press Ctrl+. to trigger the Quick Actions and Refactorings menu and select Convert to ‘switch’ statement or Convert to ‘switch’ expression.

Convert if to switch statement or expression
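
For example, on a made-up helper (the exact code the tooling generates may differ slightly):

using System;

class Program
{
    // Before: an if/else chain over a discrete value.
    static string DayTypeBefore(DayOfWeek day)
    {
        if (day == DayOfWeek.Saturday || day == DayOfWeek.Sunday)
            return "weekend";
        else
            return "weekday";
    }

    // After "Convert to 'switch' expression".
    static string DayTypeAfter(DayOfWeek day) => day switch
    {
        DayOfWeek.Saturday => "weekend",
        DayOfWeek.Sunday => "weekend",
        _ => "weekday"
    };

    static void Main() => Console.WriteLine(DayTypeAfter(DayOfWeek.Sunday)); // weekend
}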

The make local function static refactoring allows you to make a local function static and pass in variables defined outside the function to the function’s declaration and calls. Place your cursor on the function name. Press Ctrl+. to trigger the Quick Actions and Refactorings menu and select Make local function ‘static’.

Make Local Function Static
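
Here is a small illustrative before/after (not from the post):

using System;

class Program
{
    static void Main()
    {
        int factor = 3;

        // Before: the local function silently captured 'factor'.
        //   int Scale(int value) => value * factor;

        // After "Make local function 'static'": the captured variable becomes an
        // explicit parameter, so the function can no longer touch enclosing state.
        static int Scale(int value, int factor) => value * factor;

        Console.WriteLine(Scale(5, factor)); // 15
    }
}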

The pass variable explicitly in a local static function refactoring gives you the flexibility to define variables outside a static local function and still pass them in as arguments. Place your cursor on the variable within the static local function. Press Ctrl+. to trigger the Quick Actions and Refactorings menu and select Pass variable explicitly in local static function.

Pass Variable Explicitly Static Local Function
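
A sketch of the scenario this targets (names are mine, not from the post):

using System;

class Program
{
    static void Main()
    {
        string prefix = "LOG";

        // Before: referencing 'prefix' inside a static local function is a compile
        // error, because static local functions cannot capture enclosing variables.
        //   static void Write(string message) => Console.WriteLine($"{prefix}: {message}");

        // After "Pass variable explicitly in local static function":
        static void Write(string message, string prefix) =>
            Console.WriteLine($"{prefix}: {message}");

        Write("started", prefix);
    }
}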

The add null checks for all parameters refactoring saves you time by automatically adding if statements that check all nullable, unchecked parameters for null. Place your cursor on any parameter within the method. Press Ctrl+. to trigger the Quick Actions and Refactorings menu and select Add null checks for all parameters.

Add Null Checks for All Parameters
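
The generated guard clauses look roughly like this (my sample method, not from the post):

using System;

class Program
{
    // After "Add null checks for all parameters", a guard clause is generated
    // for each nullable parameter that isn't already checked.
    static string Join(string first, string second)
    {
        if (first is null)
        {
            throw new ArgumentNullException(nameof(first));
        }

        if (second is null)
        {
            throw new ArgumentNullException(nameof(second));
        }

        return first + " " + second;
    }

    static void Main() => Console.WriteLine(Join("Hello", "World"));
}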

The introduce a local variable refactoring allows you to immediately generate a local variable to replace an existing expression. Highlight the expression that you want to assign to a new local variable. Press Ctrl+. to trigger the Quick Actions and Refactorings menu and select Introduce local for.

Introduce Local Variable
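
For instance (illustrative only; the generated variable name is yours to rename):

using System;

class Program
{
    static void Main()
    {
        // Before: the expression is used inline.
        //   Console.WriteLine(Math.Sqrt(3 * 3 + 4 * 4));

        // After highlighting the expression and choosing "Introduce local for ...":
        double hypotenuse = Math.Sqrt(3 * 3 + 4 * 4);
        Console.WriteLine(hypotenuse); // 5
    }
}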

Get involved

This was just a sneak peek of what's new in Visual Studio 2019. For a complete list of what's new, see the release notes. And feel free to provide feedback on the Developer Community website, or using the Report a Problem tool in Visual Studio.

The post Catch up on the latest .NET Productivity features appeared first on .NET Blog.

Learning from our customers in the Greater China Region

.NET Core March 2020 Updates – 2.1.17 and 3.1.3

Today, we are releasing the .NET Core March 2020 Update. These updates only contain non-security fixes. See the individual release notes for details on updated packages.

NOTE: If you are a Visual Studio user, there are MSBuild version requirements, so use only the .NET Core SDK supported for your Visual Studio version. The information you need to make this choice is on the download page. If you use other development environments, we recommend using the latest SDK release.

Getting the Update

The latest .NET Core updates are available on the .NET Core download page. This update will be included in a future update of Visual Studio.

See the .NET Core release notes ( 2.1.17 | 3.1.3 ) for details on the release, including issues fixed and affected packages.

Docker Images

.NET Docker images have been updated for today’s release. The following repos have been updated.

Note: You must pull updated .NET Core container images to get this update, with either docker pull or docker build --pull.

The post .NET Core March 2020 Updates – 2.1.17 and 3.1.3 appeared first on .NET Blog.
