
Better security with enhanced access control experience in Azure Files


We are making it easier for customers to “lift and shift” applications to the cloud while maintaining the same security model used on-premises with the general availability of Azure Active Directory Domain Services (Azure AD DS) authentication for Azure Files. By integrating Azure AD DS, you can mount your Azure file share over SMB using Azure Active Directory (Azure AD) credentials from Azure AD DS domain joined Windows virtual machines (VMs) with NTFS access control lists (ACLs) enforced.


Azure AD DS authentication for Azure Files allows users to specify granular permissions on shares, files, and folders. It unblocks common use cases like single-writer and multi-reader scenarios for your line-of-business applications. As the file permission assignment and enforcement experience matches that of NTFS, lifting and shifting your application into Azure is as easy as moving it to a new SMB file server. This also makes Azure Files an ideal shared storage solution for cloud-based services. For example, Windows Virtual Desktop recommends using Azure Files to host different user profiles and leverage Azure AD DS authentication for access control.

Since Azure Files strictly enforces NTFS discretionary access control lists (DACLs), you can use familiar tools like Robocopy to move data into an Azure file share while persisting all of your important security controls. Azure Files access control lists are also captured in Azure file share snapshots for backup and disaster recovery scenarios. This ensures that file access control lists are preserved on data recovery using services like Azure Backup that leverage file snapshots.

Follow the step-by-step guidance to get started today. To better understand the benefits and capabilities, you can refer to our overview of Azure AD DS authentication for Azure Files.

What’s new in general availability?

Based on your feedback, there are several new features to share since the preview:

Seamless integration with Windows File Explorer on permission assignments: When we demoed this feature at Microsoft Ignite 2018, we showed changing and viewing permissions with a Windows command line tool called icacls. There were clearly some challenges, since icacls is not easily discoverable or consistent with common user behavior. Starting with general availability, you can view or modify the permissions on a file or folder with Windows File Explorer, just like on any regular file share.

Integration with Windows File Explorer on permission assignments

New built-in role-based access controls to simplify share level access management: To simplify share-level access management, we have introduced three new built-in role-based access controls—Storage File Data SMB Share Elevated Contributor, Contributor, and Reader. Instead of creating custom roles, you can use the built-in roles for granting share-level permissions for SMB access to Azure Files.

What is next for Azure Files access control experience?

Supporting authentication with Azure Active Directory Domain Services is most useful for application lift and shift scenarios, but Azure Files can help with moving all on-premises file shares, regardless of whether they are providing storage for an application or for end users. Our team is working to extend authentication support to Windows Server Active Directory hosted on-premises or in the cloud.

If you are interested in hearing about future updates on Azure Files Active Directory authentication, sign up today. For general feedback on Azure Files, email us at AzureFiles@microsoft.com.


Calling all .NET desktop and mobile developers!


We would love to hear about your experience with building client applications in .NET. Your feedback will greatly help us improve the .NET tooling and ensure our roadmap focuses on your needs. Participate in shaping the future of .NET client development by taking this short survey (5 minutes to complete).

We are also searching for developers to discuss new concepts and prototypes, so tell us in the survey if you would like the .NET engineering team to reach out to you about upcoming opportunities in .NET UI development.

Take survey!

We really appreciate your input and will base our decisions on the feedback we hear from you.

The post Calling all .NET desktop and mobile developers! appeared first on .NET Blog.

TraceProcessor 0.2.0


TraceProcessor version 0.2.0 is now available on NuGet with the following package ID:

Microsoft.Windows.EventTracing.Processing.All

This release contains minor feature additions and bug fixes since version 0.1.0. (A full changelog is below).

There are a couple of project settings we recommend using with TraceProcessor:

  1. We recommend running exes as 64-bit. The Visual Studio default for a new C# console application is Any CPU with Prefer 32-bit checked. Trace processing can be memory-intensive, especially with larger traces, and we recommend changing Platform target to x64 (or unchecking Prefer 32-bit) in exes that use TraceProcessor. To change these settings, see the Build tab under Properties for the project. To change these settings for all configurations, ensure that the Configuration dropdown is set to All Configurations, rather than the default of the current configuration only.
  2. We also suggest using NuGet with the newer-style PackageReference mode rather than the older packages.config mode. To change the default for new projects, see Tools, NuGet Package Manager, Package Manager Settings, Package Management, Default package management format.

TraceProcessor supports loading symbols and getting stacks from several data sources. The following console application looks at CPU samples and outputs the estimated duration that a specific function was running (based on the trace’s statistical sampling of CPU usage):


using Microsoft.Windows.EventTracing;
using Microsoft.Windows.EventTracing.Cpu;
using Microsoft.Windows.EventTracing.Symbols;
using System;
using System.Collections.Generic;

class Program
{
    static void Main(string[] args)
    {
        if (args.Length != 3)
        {
            Console.Error.WriteLine("Usage: GetCpuSampleDuration.exe <trace.etl> <imageName> <functionName>");
            return;
        }

        string tracePath = args[0];
        string imageName = args[1];
        string functionName = args[2];

        Dictionary<string, Duration> matchDurationByCommandLine = new Dictionary<string, Duration>();

        using (ITraceProcessor trace = TraceProcessor.Create(tracePath))
        {
            // Register the data sources needed before processing; their results become available after Process().
            IPendingResult<ISymbolDataSource> pendingSymbolData = trace.UseSymbols();
            IPendingResult<ICpuSampleDataSource> pendingCpuSamplingData = trace.UseCpuSamplingData();

            trace.Process();

            ISymbolDataSource symbolData = pendingSymbolData.Result;
            ICpuSampleDataSource cpuSamplingData = pendingCpuSamplingData.Result;

            symbolData.LoadSymbolsForConsoleAsync(SymCachePath.Automatic, SymbolPath.Automatic).GetAwaiter().GetResult();
            Console.WriteLine();

            IThreadStackPattern pattern = AnalyzerThreadStackPattern.Parse($"{imageName}!{functionName}");

            // Accumulate sampled CPU time (sample weight) per process command line
            // for samples whose stack matches imageName!functionName.
            foreach (ICpuSample sample in cpuSamplingData.Samples)
            {
                if (sample.Stack != null && sample.Stack.Matches(pattern))
                {
                    string commandLine = sample.Process.CommandLine;

                    if (!matchDurationByCommandLine.ContainsKey(commandLine))
                    {
                        matchDurationByCommandLine.Add(commandLine, Duration.Zero);
                    }

                    matchDurationByCommandLine[commandLine] += sample.Weight;
                }
            }
        }

        foreach (string commandLine in matchDurationByCommandLine.Keys)
        {
            Console.WriteLine($"{commandLine}: {matchDurationByCommandLine[commandLine]}");
        }
    }
}

Running this program produces output similar to the following:

C:\GetCpuSampleDuration\bin\Debug> GetCpuSampleDuration.exe C:\boot.etl user32.dll LoadImageInternal
0.0% (0 of 1165; 0 loaded)
<snip>
100.0% (1165 of 1165; 791 loaded)

wininit.exe: 15.99 ms
C:\Windows\Explorer.EXE: 5 ms
winlogon.exe: 20.15 ms
“C:\Users\AdminUAC\AppData\Local\Microsoft\OneDrive\OneDrive.exe” /background: 2.09 ms

(Output details will vary depending on the trace).

Internally, TraceProcessor uses the SymCache format, which is a cache of some of the data stored in a PDB. When loading symbols, TraceProcessor requires specifying a location to use for these SymCache files (a SymCache path) and supports optionally specifying a SymbolPath to access PDBs. When a SymbolPath is provided, TraceProcessor will create SymCache files out of PDB files as needed, and subsequent processing of the same data can use the SymCache files directly for better performance.

The full changelog for version 0.2.0 is as follows:

Breaking Changes

  • Multiple Timestamp properties are now TraceTimestamp instead (which implicitly converts to the former Timestamp return type).
  • When a trace containing lost events is processed and AllowLostEvents was not set to true, a new TraceLostEventsException is thrown.

New Data Exposed

  • ISymbolDataSource now exposes Pdbs. This list contains every PDB that LoadSymbols is capable of loading for the trace.
  • IDiskActivity now exposes StorportDriverDiskServiceDuration and IORateData.
  • IMappedFileLifetime and IPageFileSectionLifetime now expose CreateStacks and DeleteStacks.
  • trace.UseContextSwitchData and trace.UseReadyThreadData are now available individually rather than only as part of trace.UseCpuSchedulingData.
  • Last Branch Record (LBR) data has been added and is available via trace.UseLastBranchRecordData.
  • EventContext now provides access to original trace timestamp values.
  • IEnergyEstimationInterval now exposes ConsumerId.

Bug Fixes

  • A NullReferenceException inside of ICpuThreadActivity.WaitingDuration has been fixed.
  • An InvalidOperationException inside of Stack Tags has been fixed.
  • An InvalidOperationException inside of multiple file and registry path related properties has been fixed.
  • A handle leak inside of TraceProcessor.Create has been fixed.
  • A hang inside of ISymbolDataSource.LoadSymbolsAsync has been fixed.
  • Support for loading local PDB files and transcoding them into symcache files has been fixed.
  • Disks that were not mounted when the trace was recorded will now result in an IDisk that will throw on access to most properties instead of returning zeroes. Use the new IDisk.HasData property to check this condition before accessing these properties. This pattern is similar to how IPartition already functions.
  • A COMException in IDiskActivityDataSource.GetUsagePercentages has been fixed.

Other

  • IImageWeakKey has been deprecated as it can contain inaccurate data. IImage.Timestamp and IImage.Size should be used instead.
  • OriginalImageName has been deprecated as it can contain inaccurate data. IImage.OriginalFileName should be used instead.
  • Most primitive data types (Timestamp, FrequencyValue, etc) now implement IComparable.
  • A new setting, SuppressFirstTimeSetupMessage, has been added to TraceProcessorSettings. When set to true, the message regarding our first time setup process running will be suppressed.
  • SymbolPath and SymCachePath now include static helpers for generating commonly used paths.

As before, if you find these packages useful, we would love to hear from you, and we welcome your feedback. For questions about using this package, you can post on StackOverflow with the tag .net-traceprocessing, and feedback can also be sent via email to traceprocessing@microsoft.com.

The post TraceProcessor 0.2.0 appeared first on Windows Developer Blog.

Announcing new AMD EPYC™-based Azure Virtual Machines


Microsoft is committed to giving our customers industry-leading performance for all their workloads. After being the first global cloud provider to announce the deployment of AMD EPYC™ based Azure Virtual Machines in 2017, we’ve been working together to continue bringing the latest innovation to enterprises.

Today, we are announcing our second-generation HB-series Azure Virtual Machines, HBv2, which features the latest AMD EPYC 7002 processor. Customers will be able to increase HPC performance and scalability to run materially larger workloads on Azure. We’ll also be bringing the AMD 7002 processors and Radeon Instinct GPUs to our family of cloud-based virtual desktops. Finally, our new Dav3 and Eav3-series Azure Virtual Machines, in preview today, provide more customer choice to meet a broad range of requirements for general purpose workloads using the new AMD EPYC™ 7452 processor.

Our growing Azure HPC offerings

Customers are choosing our Azure HPC offerings (HB-series) incorporating first generation AMD EPYC Naples for their performance and scalability. We’ve seen a 33 percent memory bandwidth advantage with EPYC, and that’s a key factor for many of our customers’ HPC workloads. For example, fluid dynamics is one workload in which this advantage is valuable. Azure has an increasing number of customers for whom this is a core part of their R&D and even production activities. On ANSYS Fluent, a widely used fluid dynamics application, we have measured EPYC-powered HB instances delivering a 54x performance improvement by scaling across nearly 6,000 processor cores. And this is 24 percent faster than a leading bare-metal solution with an identical InfiniBand network. Additionally, earlier this year, Azure became the first cloud to scale a tightly coupled HPC application to 10,000 cores. This is 10x higher than what had been previously possible on any other cloud provider. Azure customers will be among the first to take advantage of this capability to tackle the toughest challenges and innovate with purpose.

New HPC, general purpose, and memory optimized Azure Virtual Machines

Azure is continuing to increase its HPC capabilities, thanks in part to our collaboration with AMD. In preliminary benchmarking, HBv2 VMs featuring 120 CPUs from the second generation EPYC processor are demonstrating performance gains of over 100 percent on HPC workloads like fluid dynamics and automotive crash test analysis. HBv2 scalability limits are also increasing with the cloud’s first deployment of 200 Gigabit InfiniBand, thanks to the second generation EPYC processor’s PCIe 4.0 capability. HBv2 virtual machines (VMs) will support up to 36,000 cores for MPI workloads in a single virtual machine scale set, and up to 80,000 cores for our largest customers.

We’ll also be bringing AMD EPYC 7002 processor to our family of cloud-based remote desktops, pairing with the Radeon MI25 GPU for customers running Windows-based environments. The new series offers unprecedented GPU resourcing flexibility, giving customers more choice than ever before to size virtual machines all the way from 1/8th of a single GPU up to a whole GPU.

Finally, we are also announcing new Azure Virtual Machines as part of the Dv3 and Ev3-series—optimized for general purpose and memory intensive workloads. These new VM sizes feature AMD’s EPYC™ 7452 processor. The new general purpose Da_v3 and Das_v3 Azure Virtual Machines provide up to 64 vCPUs, 256 GiBs of RAM, and 1,600 GiBs of SSD-based temporary storage. Additionally, the new memory optimized Ea_v3 and Eas_v3 Azure Virtual Machines provide up to 64 vCPUs, 432 GiBs of RAM, and 1,600 GiBs of SSD-based temporary storage. Both VM series support Premium SSD disk storage. The new VMs are currently in preview in the East US Azure region, with availability coming to other regions soon.

Da_v3 and Das_v3 virtual machines can be used for a broad range of general-purpose applications. Example use cases include most enterprise-grade applications, relational databases, in-memory caching, and analytics. Applications that demand faster CPUs, better local disk performance, or more memory can also benefit from these new VMs. Additionally, the Ea_v3 and Eas_v3 VM series are optimized for other large in-memory business critical workloads.

Taking advantage of these new offerings

Using SkipToken for Paging in Asp.Net OData and Asp.Net Core OData


Loading large data can be slow. Services often rely on pagination to load the data incrementally to improve the response times and the user experience. Paging can be server-driven or client-driven:

Client-driven paging

In client-driven paging, the client decides how many records it wants to load and asks the server for that many records. This is achieved by using the $skip and $top query options in conjunction. For instance, if a client needs to request the 10 records from 71-80, it can send a request like the one below:

GET ~/Products?$skip=70&$top=10

However, this is problematic if the data is susceptible to change. If records are inserted or deleted between two requests for consecutive pages, a record can end up being served twice or skipped entirely.

Server-driven paging

In server-driven paging, the client asks for a collection of entities and the server sends back partial results as well as a nextlink to use to retrieve more results. The nextlink is an opaque link which may use $skiptoken to store state about the request, such as the last read entity.

Default NextLink Generation

Skiptoken is now available with Asp.Net Core OData >= 7.2.0 and Asp.Net OData >= 7.2.0. It can be enabled by calling the SkipToken() extension method on HttpConfiguration.

configuration.MaxTop(10).Expand().Filter().OrderBy().SkipToken();

The default implementation of the skiptoken handling is encapsulated by a new class – DefaultSkipTokenHandler. This class implements the abstract methods of the base class SkipTokenHandler, which basically determines the format of the skiptoken value and how that value gets used while applying the SkipTokenQueryOption to the IQueryable.

Format of the nextlink

The nextlink may contain $skiptoken if the result needs to be paginated. In the default implementation, the $skiptoken value will be a list of pairs, where each pair consists of a property name and property value separated by a delimiter (:). The orderby property and value pairs will be followed by the key property and value pairs in the value for $skiptoken. Each property and value pair is comma separated.

~/Products?$skiptoken=Id:27
~/Books?$skiptoken=ISBN:978-2-121-87758-1,CopyNumber:11
~/Products?$skiptoken=Id:25&$top=40
~/Products?$orderby=Name&$skiptoken=Name:'KitKat',Id:25&$top=40
~/Cars(id)/Colors?$skip=4

Applying SkipToken Query Option

The skiptoken value is parsed into a dictionary of property name and property-value pairs.  For each pair, we compose a predicate on top of the IQueryable to ensure that the resources returned have greater (or lesser in case of desc orderby) values than the last object encoded in the nextlink.
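
To make this concrete, here is a minimal sketch (not the library’s actual implementation) of how a default-style skiptoken such as Name:'KitKat',Id:25 could be parsed into property/value pairs and composed into a predicate over an IQueryable. The Product type, the ascending Name ordering, and the hard-coded handling of just these two properties are assumptions for illustration.

using System;
using System.Collections.Generic;
using System.Linq;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class SkipTokenSketch
{
    // Parse "Name:'KitKat',Id:25" into { Name -> KitKat, Id -> 25 } (simplified: no escaping rules).
    public static Dictionary<string, string> Parse(string skipTokenValue)
    {
        return skipTokenValue
            .Split(',')
            .Select(pair => pair.Split(new[] { ':' }, 2))
            .ToDictionary(parts => parts[0], parts => parts[1].Trim('\''));
    }

    // Return only the rows that come strictly after the last returned (Name, Id),
    // assuming an ascending $orderby=Name with Id as the tie-breaking key.
    public static IQueryable<Product> ApplySkipToken(IQueryable<Product> query, Dictionary<string, string> token)
    {
        string lastName = token["Name"];
        int lastId = int.Parse(token["Id"]);

        return query
            .Where(p => string.Compare(p.Name, lastName) > 0
                        || (p.Name == lastName && p.Id > lastId))
            .OrderBy(p => p.Name)
            .ThenBy(p => p.Id);
    }
}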

Custom NextLink Generation

The library provides you with a way to specify your own custom nextlink generation through dependency injection. The code below is delegating the responsibility of handling nextlink to a new class named SkipTopNextLinkGenerator by calling into the container builder extension method in MapODataServiceRoute.

configuration.MaxTop(10).Expand().Filter().OrderBy().SkipToken();

 configuration.MapODataServiceRoute("customskiptoken", "customskiptoken", builder =>
     builder.AddService(ServiceLifetime.Singleton, sp => EdmModel.GetEdmModel(configuration))
            .AddService<IEnumerable<IODataRoutingConvention>>(ServiceLifetime.Singleton, sp =>
               ODataRoutingConventions.CreateDefaultWithAttributeRouting("customskiptoken", configuration))
            .AddService<SkipTokenHandler, SkipTopNextLinkGenerator>(ServiceLifetime.Singleton));

Generating the NextLink

A nextlink can be generated by implementing the GenerateNextPageLink method in a derived class of SkipTokenHandler. The instance passed to this method will be the last object being serialized in the collection.

/// <summary>
/// Returns the URI for NextPageLink.
/// </summary>
/// <param name="baseUri">BaseUri for nextlink.</param>
/// <param name="pageSize">Maximum number of records in the set of partial results for a resource.</param>
/// <param name="instance">Instance based on which SkipToken value will be generated.</param>
/// <param name="context">Serializer context.</param>
/// <returns>URI for the NextPageLink.</returns>
public abstract Uri GenerateNextPageLink(Uri baseUri, int pageSize, Object instance, ODataSerializerContext context);

However, if your paging strategy does not align with this approach for all your use cases, you can set the nextlink in your controller method by returning a paged result. This will override the nextlink that is generated by your implementation of the SkipTokenHandler.

// yourCustomNextLink is the Uri you built, for example new Uri("https://myservice/odata/Entity?$skiptoken=myValue")
return new PageResult<Product>(
        results as IEnumerable<Product>,
        yourCustomNextLink,
        inLineCount);

 

Applying the SkipToken

In your custom nextlink generation, you are free to use the skiptoken in the nextlink to encode additional information that you may require. However, you will also have to implement how to use the SkipToken query option by implementing your own ApplyTo methods.

/// <summary>
/// Apply the $skiptoken query to the given IQueryable.
/// </summary>
/// <param name="query">The original <see cref="IQueryable"/>.</param>
/// <param name="skipTokenQueryOption">The query option that contains all the relevant information for applying skiptoken.</param>
/// <returns>The new <see cref="IQueryable"/> after the skiptoken query has been applied.</returns>
public abstract IQueryable<T> ApplyTo<T>(IQueryable<T> query, SkipTokenQueryOption skipTokenQueryOption);

To implement your own ApplyTo, it may be useful to look at the DefaultSkipTokenHandler’s implementation.
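
For orientation, the skeleton below sketches what a custom handler such as the SkipTopNextLinkGenerator registered earlier might look like: it encodes a $skip-style count in the token rather than the last entity. This is a sketch under assumptions: the namespaces reflect the ASP.NET OData 7.x packages, SkipTokenQueryOption.RawValue is assumed to expose the raw $skiptoken string, and depending on the library version further abstract members (for example a non-generic ApplyTo overload) may also need to be overridden.

using System;
using System.Linq;
using Microsoft.AspNet.OData.Formatter.Serialization;
using Microsoft.AspNet.OData.Query;

// Sketch: the nextlink encodes how many records have been served so far.
public class SkipTopNextLinkGenerator : SkipTokenHandler
{
    public override Uri GenerateNextPageLink(Uri baseUri, int pageSize, object instance, ODataSerializerContext context)
    {
        // For illustration only: a real implementation would track how many records have
        // already been serialized (e.g. via the serializer context) instead of using pageSize directly.
        string separator = string.IsNullOrEmpty(baseUri.Query) ? "?" : "&";
        return new Uri(baseUri + separator + "$skiptoken=" + pageSize);
    }

    public override IQueryable<T> ApplyTo<T>(IQueryable<T> query, SkipTokenQueryOption skipTokenQueryOption)
    {
        // RawValue is assumed to carry the raw $skiptoken value from the request.
        int alreadyServed = int.Parse(skipTokenQueryOption.RawValue);
        return query.Skip(alreadyServed);
    }
}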

The post Using SkipToken for Paging in Asp.Net OData and Asp.Net Core OData appeared first on OData.

Update on .NET Standard adoption


It’s been about two years since I announced .NET Standard 2.0. Since then we’ve been working hard to increase the set of .NET Standard-based libraries for .NET. This includes many of the BCL components, such as the Windows Compatibility Pack, but also other popular libraries, such as JSON.NET, the Azure SDK, and the AWS SDK. In this blog post, I’ll share some thoughts and numbers about the .NET ecosystem and .NET Standard.

Adoption by the numbers

In order to track adoption, we’re looking at nuget.org. On a regular interval, we check whether new package versions add support for .NET Standard. Once a package ID does, we stop looking at its future versions. This allows us to track when a package first adopted .NET Standard.
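
In pseudocode terms, the bookkeeping is straightforward. The sketch below illustrates the “record the first version that added a .NET Standard target, then stop tracking that package ID” logic over hypothetical in-memory records; it does not query nuget.org, and the PackageVersionInfo type is invented for illustration.

using System;
using System.Collections.Generic;
using System.Linq;

class PackageVersionInfo
{
    public string Id { get; set; }
    public string Version { get; set; }
    public DateTime Published { get; set; }
    public string[] TargetFrameworks { get; set; }
}

static class AdoptionTracker
{
    // For each package ID, returns the first published version that targets .NET Standard.
    // Later versions of a package that already adopted are ignored.
    public static Dictionary<string, string> FirstNetStandardVersion(IEnumerable<PackageVersionInfo> versions)
    {
        return versions
            .Where(v => v.TargetFrameworks.Any(tfm => tfm.StartsWith("netstandard", StringComparison.OrdinalIgnoreCase)))
            .GroupBy(v => v.Id)
            .ToDictionary(g => g.Key, g => g.OrderBy(v => v.Published).First().Version);
    }
}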

For the purposes of measuring adoption in the ecosystem, we’ve excluded all packages that represent the .NET platform (e.g. System.*) or were built by Microsoft, e.g. Microsoft.Azure.*. Of course, we track that too, but as part of pushing first parties to adopt .NET Standard.

This is what the adoption looks like:

  • On nuget.org:
    • 47% of the top one thousand packages support .NET Standard
    • 30% of all packages support .NET Standard (about 48k out of 160k packages)
  • Generously adding trendlines, we could expect ~100% by around 2022
    • Trendlines border on using a magic 8 ball, so take these figures with a big jar of salt.
    • We’ll likely never get to a 100% but it seems to suggest that we can expect maximum reach within the next two to three years, which seems realistic and is in line with our expectations.

What should I do?

With few exceptions, all libraries should be targeting .NET Standard. Exceptions include UI-only libraries (e.g. a WinForms control) or libraries that are just building blocks inside of a single application.

In order to decide the version number, you can use the interactive version picker. But when in doubt, just start with .NET Standard 2.0. Even after .NET Standard 2.1 is released later this year, most libraries should still target .NET Standard 2.0. That’s because most libraries won’t need the API additions and .NET Framework will never be updated to support .NET Standard 2.1 or higher.

This recommendation is also reflected in the .NET library guidance we published earlier (taken from the cross-platform targeting section):

✔ DO start with including a netstandard2.0 target.

Most general-purpose libraries should not need APIs outside of .NET Standard 2.0. .NET Standard 2.0 is supported by all modern platforms and is the recommended way to support multiple platforms with one target.

There are some reasons why you may want to update to .NET Standard 2.1. The primary reasons would be:

  • Wide support for Span<T>
    • We’ve added various new methods across the BCL to support span-based APIs for writing allocation-free code (a short example follows this list)
  • New language features
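
For instance, .NET Standard 2.1 includes span-based overloads such as int.TryParse(ReadOnlySpan<char>, out int), which let a library parse a slice of an existing string or buffer without allocating a substring. A minimal illustration (the OrderIdParser type is invented for this example):

using System;

static class OrderIdParser
{
    // Parses the numeric suffix of strings like "order-12345" without allocating a substring.
    public static bool TryParseOrderId(string text, out int orderId)
    {
        ReadOnlySpan<char> digits = text.AsSpan(text.IndexOf('-') + 1);
        return int.TryParse(digits, out orderId); // span-based overload available in .NET Standard 2.1
    }
}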

Summary

.NET Standard adoption is already quite high, but it’s still growing. Please continue to update the packages you haven’t updated yet. And when creating new packages, continue to start with .NET Standard 2.0, even after .NET Standard 2.1 has shipped.

Happy coding!

The post Update on .NET Standard adoption appeared first on .NET Blog.

Microsoft brings best-in-class productivity apps and services to Samsung devices

Introducing NVv4 Azure Virtual Machines for GPU visualization workloads


Azure offers a wide variety of virtual machine (VM) sizes tailored to meet diverse customer needs. Our NV size family has been optimized for GPU-powered visualization workloads, such as CAD, gaming, and simulation. Today, our customers are using these VMs to power remote visualization services and virtual desktops in the cloud. While our existing NV size VMs work great to run graphics heavy visualization workloads, a common piece of feedback we receive from our customers is that for entry-level desktops in the cloud, only a fraction of the GPU resources is needed. Currently, the smallest sized GPU VM comes with one full GPU and more vCPU/RAM than a knowledge worker desktop requires in the cloud. For some customers, this is not a cost-effective configuration for entry-level scenarios.

Announcing NVv4 Azure Virtual Machines based on AMD EPYC 7002 processors and virtualized Radeon MI25 GPU.

The new NVv4 virtual machine series will be available for preview in the fall. NVv4 offers unprecedented GPU resourcing flexibility, giving customers more choice than ever before. Customers can select from VMs with a whole GPU all the way down to 1/8th of a GPU. This makes entry-level and low-intensity GPU workloads more cost-effective than ever before, while still giving customers the option to scale up to powerful full-GPU processing power.

NVv4 Virtual Machines support up to 32 vCPUs, 112 GB of RAM, and 16 GB of GPU memory.

 

Size                vCPU  Memory  GPU memory  Azure network
Standard_NV4as_v4   4     14 GB   2 GB        50 Gbps
Standard_NV8as_v4   8     28 GB   4 GB        50 Gbps
Standard_NV16as_v4  16    56 GB   8 GB        50 Gbps
Standard_NV32as_v4  32    112 GB  16 GB       50 Gbps

With our hardware-based GPU virtualization solution built on top of AMD MxGPU and industry standard SR-IOV technology, customers can securely run workloads on virtual GPUs with dedicated GPU frame buffer. The new NVv4 Virtual Machines will also support Azure Premium SSD disks. NVv4 will have simultaneous multithreading (SMT) enabled for applications that can take advantage of additional vCPUs.

For customers looking to utilize GPU-powered VMs as part of the desktop as a service (DaaS) offering, Windows Virtual Desktop provides a comprehensive desktop and application virtualization service running in Azure. The new NVv4-series Virtual Machines will be supported by Windows Virtual Desktop as well as Azure Batch  for cloud-native batch processing.

Remote display application and protocols are key to a good end user experience with VDI/DaaS in the cloud. The new virtual machine series will work with Windows Remote Desktop (RDP) 10, Teradici PCoIP, and HDX 3D Pro. The AMD Radeon GPUs support DirectX 9 through 12, OpenGL 4.6, and Vulkan 1.1.

Customers can sign up for NVv4 access today by filling out this form. NVv4 Virtual Machines will initially be available later this year in the South Central US and West Europe Azure regions and will be available in additional regions soon thereafter.


Introducing the new HBv2 Azure Virtual Machines for high-performance computing


Today we are announcing the second-generation HB-series Azure Virtual Machines for high-performance computing (HPC). HBv2 Virtual Machines are designed to deliver leadership-class performance, message passing interface (MPI) scalability, and cost efficiency for a variety of real-world HPC workloads.

HBv2 Virtual Machines feature 120 AMD EPYC™ 7002-series CPU cores, 480 GB of RAM, 480 MB of L3 cache, and no simultaneous multithreading (SMT). HBv2 Virtual Machines provide up to 350 GB/sec of memory bandwidth, which is 45-50 percent more than comparable x86 alternatives and three times faster than what most HPC customers have in their datacenters today.

Size              CPU cores  Memory  Memory per CPU core  Local SSD  RDMA network  Azure network
Standard_HB120rs  120        480 GB  4 GB                 1.6 TB     200 Gbps      40 Gbps

‘r’ denotes support for RDMA. ‘s’ denotes support for Premium SSD disks.

Each HBv2 virtual machine (VM) also features up to 4 teraFLOPS of double-precision performance, and up to 8 teraFLOPS of single-precision performance. This is a fourfold increase over our first generation of HB-series Virtual Machines, and substantially improves performance for applications demanding the fastest memory and leadership-class compute density.

Below are preliminary benchmarks on HBv2 across several common HPC applications and domains:

Relative performance bar graph

To drive optimal at-scale message passing interface (MPI) performance, HBv2 Virtual Machines feature 200 Gb/s HDR InfiniBand from our technology partners at Mellanox. The InfiniBand fabric backing HBv2 Virtual Machines is a non-blocking fat-tree with a low-diameter design for consistent, ultra-low latencies. Customers can use standard Mellanox/OFED drivers just as they would on a bare metal environment. HBv2 Virtual Machines officially support RDMA verbs and hence support all InfiniBand based MPIs, such as OpenMPI, MVAPICH2, Platform MPI, and Intel MPI. Customers can also leverage hardware offload of MPI collectives to realize additional performance, as well as efficiency gains for commercially licensed applications.

Across a single virtual machine scale set, customers can run a single MPI job on HBv2 Virtual Machines at up to 36,000 cores. For our largest customers, HBv2 Virtual Machines support up to 80,000 cores for single jobs.

Customers can also maximize the Ethernet interface of HBv2 Virtual Machines by using the SRIOV-based accelerated networking in Azure, which yields up to 40 Gb/s of bandwidth with consistent, low latencies.

Finally, the new H-series Virtual Machines feature local NVMe SSDs to deliver ultra-fast temporary storage for the full range of file sizes and I/O patterns. Using modern burst-buffer technologies like BeeGFS BeeOND, the new H-series Virtual Machines can deliver more than 900 GB/sec of peak injection I/O performance across a single virtual machine scale set. The new H-series Virtual Machines will also support Azure Premium SSD disks.

Customers can accelerate their HBv2 deployments with a variety of resources optimized and pre-configured by the Azure HPC team. Our pre-built HPC image for CentOS is tuned for optimal performance and bundles key HPC tools like various MPI libraries, compilers, and more. The AzureHPC Project helps customers deploy an end-to-end Azure HPC environment reliably and quickly, and includes deployment scripts for setting up building blocks for networking, compute, schedulers, and storage. Also included is a growing list of tutorials for running HPC applications themselves.

For customers familiar with HPC schedulers and who would like to use these with HBv2 Virtual Machines, Azure CycleCloud is the simplest way to orchestrate autoscaling clusters. Azure CycleCloud supports schedulers such as Slurm, PBSPro, LSF, GridEngine, and HTCondor, and enables hybrid deployments for customers wishing to pair HBv2 Virtual Machines with their existing on-premises clusters. The new H-series Virtual Machines will also be supported by Azure Batch for cloud-native batch processing. HBv2 Virtual Machines will be available to all Azure platform partners.

Customers can sign up for HBv2 access today by filling out this form. HBv2 Virtual Machines will initially be available in the South Central US and West Europe Azure regions, with availability in additional regions soon thereafter.

Overcoming language difficulties with AI and Azure services


Ever hear the Abbott and Costello routine, “Who’s on first?” It’s a masterpiece of American English humor. But what if we translated it into another language? With a word-by-word translation, most of what English speakers laugh at would be lost. Such is the problem of machine translation (translation by computer algorithm). If a business depends on words to have an impact on the user, then translation services need to be seriously evaluated for accuracy and effect. This is how Lionbridge approaches the entire world of language translation—but now they can harness the capabilities of artificial intelligence (AI). The result is translations that reach a higher bar.

The Azure platform offers a wealth of services for partners to enhance, extend and build industry solutions. Here we describe how one Microsoft partner uses Azure to solve a unique problem.

Efficient partners for communication in life sciences

For those who deal in healthcare or life sciences, language should not be a barrier to finding the right information. The world of research and reporting is not limited to a few human languages. Life science organizations need to be able to find data from anywhere in the world. And for that, a translation service is needed that preserves not just the facts, but the effect of the original data. This is the goal of Lionbridge, a Microsoft partner dedicated to efficient translation.

In addition to localization, Lionbridge also serves as a guard against other dangers related to document handling. For example, there may be insufficient information provided to get a patient’s informed consent. Or a patient’s data can be disclosed by mistake. The penalties for any privacy violations can be steep. Having a third party whose sole business is to govern the documentation provides additional security against data mishandling.

The company can’t do this work on its own. It stresses a collaborative partnership approach to achieve the results needed. That begins with having fluency with human languages as well as with the technical domains. From their literature:

“Our team partners with yours to turn sensitive, complex, and frequently-changing content into words that resonate with every end user—from regulatory boards to care providers to patients—around the world. Our clients include pharmaceutical, medical device, medical publishing, and healthcare companies as well as Contract Research Organizations (CROs). Each demands strict attention to detail, expert understanding of nuanced requirements, and the utmost care for the end user.”

It comes as no surprise that Lionbridge depends on a host of skilled, professional translators—10,000 translators across 350 languages.

Specialized solutions

Due to the highly specialized service needs, the company operates as a consultant. After a meeting and evaluation of existing documentation and workflows, they will deliver a new workflow that includes technical services built on Azure. The company also creates a secure document exchange portal for managing translation into 350+ languages. The portal integrates with advanced workflow automation and AI powered translation. This advanced language technology enables far greater speed and volumes to be translated with increasing efficiency, opening up new languages, markets, and constituents for customers.

Lionbridge’s portal and translation management system have the appropriate controls in place in order to support a HIPAA-compliant workflow and are supported by globally distributed “Centers of Excellence.” The staff of the centers ensure adherence to ISO standards and are trained in supporting sensitive content, including personal health information (PHI).

The graphic shows the processes that are involved in creating a translation project. The project must first be defined. The project is then handed off to Lionbridge through their “Freeway Platform.” From there, it undergoes the translation process, with quality checks. The customer can see progress and results at a dashboard until the project is deemed complete.

A graphic showing the end-to-end workflow and processes that are involved in creating a translation project.

Azure services used in solution

  • Azure App Service is used as a compute resource to host applications and is valued for its automated scaling and proactive monitoring.
  • Azure SQL Database is appreciated for its automated backup, geo-replication, and failover features.
  • Azure Service Fabric supports the need for a microservices oriented platform.
  • Azure Storage (mostly blobs) is used in many applications, including for CDN purposes to allow users to access application content in many parts of the world with high speed.
  • Azure Cognitive Services is used by some applications to provide AI capabilities.

Next steps

To find out more, go to the Lionbridge offering on the Azure Marketplace and click Contact me.

To learn more about other healthcare solutions, go to the Azure for health page.

Azure Stream Analytics now supports MATCH_RECOGNIZE


MATCH_RECOGNIZE in Azure Stream Analytics significantly reduces the complexity and cost associated with building, modifying, and maintaining queries that match sequences of events for alerts or further data computation.

What is Azure Stream Analytics?

Azure Stream Analytics is a fully managed serverless PaaS offering on Azure that enables customers to analyze and process fast moving streams of data and deliver real-time insights for mission critical scenarios. Developers can use a simple SQL language, extensible to include custom code, in order to author and deploy powerful analytics processing logic that can scale-up and scale-out to deliver insights with millisecond latencies.

Traditional way to incorporate pattern matching in stream processing

Many customers use Azure Stream Analytics to continuously monitor massive amounts of data, detecting sequences of events and deriving alerts or aggregating data from those events. This, in essence, is pattern matching.

For pattern matching, customers traditionally relied on multiple joins, each one detecting a single event in particular. These joins are combined to find a sequence of events, compute results, or create alerts. Developing queries for pattern matching is a complex and error-prone process, and such queries are difficult to maintain and debug. There are also limitations when trying to express more complex patterns like Kleene stars, Kleene plus, or wildcards.

To address these issues and improve the customer experience, Azure Stream Analytics provides a MATCH_RECOGNIZE clause to define patterns and compute values from the matched events. The MATCH_RECOGNIZE clause increases user productivity, as it is easy to read, write, and maintain.

Typical scenario for MATCH_RECOGNIZE

Event matching is an important aspect of data stream processing. The ability to express and search for patterns in a data stream enable users to create simple yet powerful algorithms that can trigger alerts or compute values when a specific sequence of events is found.

An example scenario would be a food preparation facility with multiple cookers, each with its own temperature monitor. A shutdown operation for a specific cooker needs to be generated if its temperature doubles within five minutes. In this case, the cooker must be shut down, as the temperature is increasing too rapidly and could either burn the food or cause a fire hazard.

Query
SELECT * INTO ShutDown from Temperature
MATCH_RECOGNIZE (
     LIMIT DURATION (minute, 5)
     PARTITION BY cookerId
     AFTER MATCH SKIP TO NEXT ROW
     MEASURES
         1 AS shouldShutDown
     PATTERN (temperature1 temperature2)
     DEFINE
         temperature1 AS temperature1.temp > 0,
         temperature2 AS temperature2.temp > 2 * MAX(temperature1.temp)
) AS T

In the example above, MATCH_RECOGNIZE defines a limit duration of five minutes, the measures to output when a match is found, the pattern to match, and lastly how each pattern variable is defined. Once a match is found, an event containing the MEASURES values will be output into ShutDown. The match is partitioned over all the cookers by cookerId, and each partition is evaluated independently of the others.

MATCH_RECOGNIZE brings an easier way to express pattern matching, decreases the time spent writing and maintaining pattern matching queries, and enables richer scenarios that were practically impossible to write or debug before.

Get started with Azure Stream Analytics

Azure Stream Analytics enables the processing of fast-moving streams of data from IoT devices, applications, clickstreams, and other data streams in real-time. To get started, refer to the Azure Stream Analytics documentation.

Building Resilient ExpressRoute Connectivity for Business Continuity and Disaster Recovery


As more and more organizations adopt Azure for their business-critical workloads, the connectivity between organizations’ on-premises networks and Microsoft becomes crucial. ExpressRoute provides private connectivity between on-premises networks and Microsoft. By default, an ExpressRoute circuit provides redundant network connections to the Microsoft backbone network and is designed for carrier-grade high availability. However, the high availability of network connectivity is only as good as the robustness of the weakest link in its end-to-end path. Therefore, it is imperative that the customer and service provider segments of ExpressRoute connectivity are also architected for high availability.

Designing for high availability with ExpressRoute addresses these design considerations and talks about how to architect a robust end-to-end ExpressRoute connectivity between a customer on-premises network and Microsoft network core. The document addresses how to maximize high availability of an ExpressRoute in general, as well as components specific to Private peering and to Microsoft peering.

Private Peering High Availability

Each component of the ExpressRoute connectivity is key when building for high availability, including the first mile from on-premises to the peering location, multiple circuits to the same virtual network (VNet), and the virtual network gateway within the VNet.

To improve the availability of ExpressRoute virtual network gateway, Azure offers Zone-redundant virtual network gateways utilizing Availability Zones. ExpressRoute also supports Bidirectional Forwarding Detection (BFD) to expedite link failure detection and thereby significantly improving Mean Time To Recover (MTTR) following a link failure.

Microsoft Peering High Availability

Further, where and how you implement Network Address Translation (NAT) impacts MTTR of Microsoft PaaS services (including O365) consumed over Microsoft Peering following a connection failure. Path selection between the Internet and ExpressRoute on Microsoft Peering is also imperative to ensure a highly reliable and scalable architecture.

 


ExpressRoute Disaster Recovery Strategy

How about architecting ExpressRoute connectivity for disaster recovery and business continuity? Would it be possible to optimize ExpressRoute circuits in different regions both for local connectivity and to act as a backup for another regional ExpressRoute failure?  In the following architecture, how do you ensure symmetrical cross-regional traffic flow either via Microsoft backbone or via the organization’s global connectivity (outside Microsoft)? Designing for disaster recovery with ExpressRoute private peering addresses these concerns and talks about how to architect for disaster recovery using ExpressRoute private peering.


Summary

To build a robust ExpressRoute circuit, end-to-end ExpressRoute connectivity should be architected for high availability that maximizes redundancy and minimizes MTTR following a failure. A robust ExpressRoute circuit can withstand many single-point failures. However, to safeguard against disasters that impact an entire peering location, your disaster recovery plans should include geo-redundant ExpressRoute circuits. Failing over to geo-redundant ExpressRoute circuits faces challenges, including asymmetrical routing. The following documents help you architect a highly available ExpressRoute circuit and design for disaster recovery using geo-redundant ExpressRoute circuits.

 

Collaborating with our partners to help customers get the most out of Microsoft Managed Desktop

Visual Studio Code July 2019

Async loaded .NET projects may impact Visual Studio extensions


In Visual Studio 2019 version 16.3, the CSProj project system (C#/VB non-SDK style) introduces a new way of loading called Partial Load Mode (PLM). After the solution loads, the project system is doing design time builds in the background, leaving the UI responsive and interactive. However, for the time it takes to run the design time build, certain features may not be working as they used to. Extenders, read on.

Today, CSProj projects block the UI thread and wait for design time build and Roslyn initialization before firing the project load event. To further reduce solution load time, CSProj will now fire the project load event immediately after evaluation, since that is early enough to display the project tree in Solution Explorer and provide project and source code files to Roslyn.

Design time builds will happen on a background thread. This means that IntelliSense, code navigation, and designers will be in the Partial Load Mode after solution load and until the design time build results are ready. Most users will not even notice this happening beyond faster loading solutions.

This matches the current behavior of .NET SDK-style projects, which have had this capability since Visual Studio 2017. Now the experience is consistent between CSProj and SDK-style projects.

Breaking change

Calls to Roslyn APIs, such as Workspace.CurrentSolution or ProjectItem.FileCodeModel, may return an incomplete code model in PLM because project references are not yet known to Roslyn. You may have to update your extension if it’s calling on the Roslyn API shortly after solution load.

Here’s how:

// Ask the operation progress service for the status of the IntelliSense stage and wait until
// design time builds have completed (and Roslyn has the full code model) before using Roslyn APIs.
var operationProgressStatusService = await this.GetServiceAsync(typeof(SVsOperationProgressStatusService)) as IVsOperationProgressStatusService;
var stageStatus = operationProgressStatusService.GetStageStatus(CommonOperationProgressStageIds.Intellisense);

await stageStatus.WaitForCompletionAsync();

Learn more in the OperationProgress sample.
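
For context, here is a hedged sketch of where such a wait might live in an extension package. It reuses the types from the snippet above; the package class itself is invented, and the exact namespaces that expose the operation progress types (they ship with the Visual Studio SDK) are assumptions for illustration.

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.VisualStudio.Shell;
// Plus the SDK namespace that exposes IVsOperationProgressStatusService and
// CommonOperationProgressStageIds (Microsoft.VisualStudio.OperationProgress in recent SDKs).

public sealed class MyAnalysisPackage : AsyncPackage
{
    protected override async Task InitializeAsync(CancellationToken cancellationToken, IProgress<ServiceProgressData> progress)
    {
        await base.InitializeAsync(cancellationToken, progress);

        var statusService = await GetServiceAsync(typeof(SVsOperationProgressStatusService))
            as IVsOperationProgressStatusService;
        if (statusService == null)
            return;

        // Wait until design time builds are done and Roslyn has the complete code model
        // before querying Workspace.CurrentSolution or ProjectItem.FileCodeModel.
        var intellisenseStage = statusService.GetStageStatus(CommonOperationProgressStageIds.Intellisense);
        await intellisenseStage.WaitForCompletionAsync();
    }
}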

Editor owners should make an explicit decision regarding delaying the initialization of documents when IntelliSense is in progress.

To opt-out of deferring document creation, set the following in the .pkgdef file:

[$RootKey$\Editors\<Editor-type-Guid>]
"DeferUntilIntellisenseIsReady"=dword:00000000

To opt into deferring document creation (this is the current default behavior to avoid breaking compatibility with extensions depending on Roslyn data), set the following in the .pkgdef file:

[$RootKey$\Editors\<Editor-type-Guid>]
"DeferUntilIntellisenseIsReady"=dword:00000001

Test your extension

This change brings a feature currently used by SDK-style projects to the CSProj-based ones. As such, it is unlikely to cause issues for most extensions unless they have different code paths for each of the two project systems. We therefore regard this as low impact for the extension ecosystem, but it could have a big impact on an individual extension.

To find out if this change affected your extensions, download Visual Studio 2019 v16.3 Preview 1 today.

Then drop a .json file containing the below code into %LocalAppData%\Microsoft\VisualStudio\RemoteSettings\LocalTest\PersistentActions

{
  "ActionPath": "vs\core\remotesettings",
  "ActionJson": {
    "FeatureFlags": {
      "CPS.UseOperationProgress": 0,
      "CSProj.PartialLoadMode": 1,
      "Designer.PartialLoadMode": 1,
      "Completion.PartialLoadMode": 1,
      "Roslyn.PartialLoadMode": 1
      }
  },

  "TriggerJson": null,
  "MaxWaitTimeSpan": "14.00:00:00",
  "Categories": [
  ]
}

Then restart Visual Studio twice. Yes, twice. This will enable PLM for CSProj based projects.

To revert the feature flags change, delete the .json file and restart Visual Studio twice. Disabling PLM is only an option in the initial preview of Visual Studio 2019 v16.3. The option will be removed in a future update.

The post Async loaded .NET projects may impact Visual Studio extensions appeared first on The Visual Studio Blog.


Game performance improvements in Visual Studio 2019 version 16.2


This spring, Gratian Lup described in his blog post the improvements for C++ game development in Visual Studio 2019. Between Visual Studio 2019 version 16.0 and version 16.2 we’ve made some more improvements. On the Infiltrator Demo, we’ve achieved 2–3% performance wins for the most CPU-intensive parts of the game.

Throughput

A huge throughput improvement was done in the linker! Check our recent blogpost on Improved Linker Fundamentals in Visual Studio 2019.

New Optimizations

A comprehensive list of new and improved C++ compiler optimizations can be found in a recent blogpost on MSVC Backend Updates in Visual Studio 2019 version 16.2. I’ll talk in a bit more detail about some of them.

All samples below are compiled for x64 with these switches: /arch:AVX2 /O2 /fp:fast /c /Fa.

Vectorizing tiny perfect reduction loops on AVX

This is a common pattern for making sure that two vectors didn’t diverge too much:

#include <xmmintrin.h>
#include <DirectXMath.h>
#include <cstdint> // for uint32_t / int32_t
uint32_t TestVectorsEqual(float* Vec0, float* Vec1, float Tolerance = 1e7f)
{
    float sum = 0.f;
    for (int32_t Component = 0; Component < 4; Component++)
    {
        float Diff = Vec0[Component] - Vec1[Component];
        sum += (Diff >= 0.0f) ? Diff : -Diff;
    }
    return (sum <= Tolerance) ? 1 : 0;
}

For version 16.2 we tweaked the vectorization heuristics for the AVX architecture to better utilize the hardware capabilities. The disassembly is for x64, AVX2, old code on the left, new on the right:

Comparison of old code versus the much-improved new code

Visual Studio 2019 version 16.0 recognized the loop as a reduction loop, didn’t vectorize it, but unrolled it completely. Version 16.2 also recognized the loop as a reduction loop, vectorized it (due to the heuristics change), and used horizontal add instructions to get the sum. As a result the code is much shorter and faster now.

Recognition of intrinsics working on a single vector element

The compiler now does a better job at optimizing vector intrinsics working on the lowest single element (those with ss/sd suffix).

A good example for the improved code is the inverse square root. This function is taken from the Unreal Engine math library (with comments removed for brevity). It’s used all over the games based on Unreal Engine for rendering objects:

#include <xmmintrin.h>
#include <DirectXMath.h>
float InvSqrt(float F)
{
    const __m128 fOneHalf = _mm_set_ss(0.5f);
    __m128 Y0, X0, X1, X2, FOver2;
    float temp;
    Y0 = _mm_set_ss(F);
    X0 = _mm_rsqrt_ss(Y0);
    FOver2 = _mm_mul_ss(Y0, fOneHalf);
    X1 = _mm_mul_ss(X0, X0);
    X1 = _mm_sub_ss(fOneHalf, _mm_mul_ss(FOver2, X1));
    X1 = _mm_add_ss(X0, _mm_mul_ss(X0, X1));
    X2 = _mm_mul_ss(X1, X1);
    X2 = _mm_sub_ss(fOneHalf, _mm_mul_ss(FOver2, X2));
    X2 = _mm_add_ss(X1, _mm_mul_ss(X1, X2));
    _mm_store_ss(&temp, X2);
    return temp;
}

Again, x64, AVX2, old code on the left, new on the right:

Comparison of old code versus the much-improved new code

Visual Studio 2019 version 16.0 generated code for all intrinsics one by one. Version 16.2 now understands the meaning of the intrinsics better and is able to combine multiply/add intrinsics into FMA instructions. There are still improvements to be made in this area and some are targeted for version 16.3/16.4.

Even now, if given a const argument, this code will be completely constant-folded:

float ReturnInvSqrt()
{
    return InvSqrt(4.0);
}

Comparison of old code versus the much-improved new code

Again, Visual Studio 2019 version 16.0 here generated code for all intrinsics, one by one. Version 16.2 was able to calculate the value at compile time. (This is done with /fp:fast switch only).

More FMA patterns

The compiler now generates FMA in more cases:

(fma a, b, (c * d)) + x -> fma a, b, (fma c, d, x)
x + (fma a, b, (c * d)) -> fma a, b, (fma c, d, x)

(a+1) * b -> fma a, b, b
(a+ (-1)) * b -> fma a, b, (-b)
(a - 1) * b -> fma a, b, (-b)
(a - (-1)) * b -> fma a, b, b
(1 - a) * b -> fma (-a), b, b
(-1 - a) * b -> fma (-a), b, -b

It also does more FMA simplifications:

fma a, c1, (a * c2) -> fmul a * (c1+c2)
fma (a * c1), c2, b -> fma a, c1*c2, b
fma a, 1, b -> a + b
fma a, -1, b -> (-a) + b -> b - a
fma -a, c, b -> fma a, -c, b
fma a, c, a -> a * (c+1)
fma a, c, (-a) -> a * (c-1)

Previously FMA generation worked only with local vectors. It was improved to work on globals too.

Here is an example of the optimization at work:

#include <xmmintrin.h>
__m128 Sample(__m128 A, __m128 B)
{
    const __m128 fMinusOne = _mm_set_ps1(-1.0f);
    __m128 X;
    X = _mm_sub_ps(A, fMinusOne);
    X = _mm_mul_ps(X, B);
    return X;
}

Old code on the left, new on the right:

Comparison of old code versus the much-improved new code

FMA is shorter and faster, and the constant is completely gone and will not occupy space.

Another sample:

#include <xmmintrin.h>
__m128 Sample2(__m128 A, __m128 B)
{
    __m128 C1 = _mm_set_ps(3.0, 3.0, 2.0, 1.0);
    __m128 C2 = _mm_set_ps(4.0, 4.0, 3.0, 2.0);
    __m128 X = _mm_mul_ps(A, C1);
    X = _mm_fmadd_ps(X, C2, B);
    return X;
}

Old code on the left, new on the right:

Comparison of old code versus the much-improved new code

Version 16.2 is doing this simplification:

fma (a * c1), c2, b -> fma a, c1*c2, b

Constants are now extracted and multiplied at compile time.

Memset and initialization

Memset code generation was improved by calling the faster CRT version where appropriate instead of expanding its definition inline. Loops that store a constant value that is formed of the same byte (e.g. 0xABABABAB) now also use the CRT version of memset. Compared with naïve code generation, calling memset is at least 2x faster on SSE2, and even faster on AVX2.

Inlining

We’ve done more tweaks to the inlining heuristics. They were modified to do more aggressive inlining of small functions containing control flow.

Improvements in Unreal Engine – Infiltrator Demo

The new optimizations pay off.

We ran the Infiltrator Demo again (see the blogpost about C++ game development in Visual Studio 2019 for a description of the demo and testing methodology). Short reminder: the Infiltrator Demo is based on Unreal Engine and is a nice approximation of a real game. Game performance is measured here by frame time: the smaller, the better (the opposite metric would be frames per second). Testing was done similarly to the previous test run; the only difference is the new hardware: this time we ran it on AMD’s newest Zen 2 processor.

Test PC configuration:

  • AMD64 Ryzen 5 3600 6-Core Processor, 3.6 GHz, 6 cores, 12 logical processors
  • Radeon RX 550 GPU
  • 16 GB RAM
  • Windows 10 1903

Results

This time we measured only /arch:AVX2 configuration. As previously, the lower the better.

Graph showing the 2-3% improvements on performance spikes

The blue line is the demo compiled with Visual Studio 2019, the yellow line – compiled with Visual Studio 2019 version 16.2. X axis – time, Y axis – frame time.

Frame times are mostly the same between the two runs, but in the parts of the demo where frame times are the highest (and thus the frame rate is lowest) with Visual Studio 2019 version 16.2 we’ve got an improvement of 2–3%.

We’d love for you to download Visual Studio 2019 and give it a try. As always, we welcome your feedback. We can be reached via the comments below or via email (visualcpp@microsoft.com). If you encounter problems with Visual Studio or MSVC, or have a suggestion for us, please let us know through Help > Send Feedback > Report A Problem / Provide a Suggestion in the product, or via Developer Community. You can also find us on Twitter (@VisualC).

 

The post Game performance improvements in Visual Studio 2019 version 16.2 appeared first on C++ Team Blog.

Azure SignalR Service now supports Event Grid!


Since we GA'ed Azure SignalR Service last September, serverless has become a very popular use case for the service, and it is used by many customers. Unlike a traditional SignalR application, which requires a server to host the hub, in the serverless scenario no server is needed; instead, you can send messages to clients directly through REST APIs or our management SDK, which can easily be used in serverless code like Azure Functions.

Though this brings a huge benefit, saving you the cost of maintaining an app server, the feature set in the serverless scenario is limited. Since there is no real hub, it's not possible to respond to client activities like client invocations or connection events. Without client events, serverless use cases are limited, and we have heard many customers asking for this support. Today we're excited to announce a new feature that enables Azure SignalR Service to publish client events to Azure Event Grid so that you can subscribe to and respond to them.

How does it work?

Let's first revisit how the serverless scenario in Azure SignalR Service works.

  1. In the serverless scenario, even though you don't have an app server, you still need a negotiate API so the SignalR client can negotiate to get the URL of the SignalR service and a corresponding access token. Usually this is done with an Azure Function.

  2. The client then uses the URL and access token to connect to the SignalR service.

  3. After clients are connected, you can send messages to them using the REST APIs or the service management SDK (a minimal REST sketch follows the diagram below). If you are using Azure Functions, our SignalR Service binding does the work for you, so you only need to return the messages through an output binding.

This flow is illustrated as steps 1-3 in the diagram below:

Serverless workflow
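
As a sketch of step 3: the REST API is plain HTTPS, so any language can call it. The snippet below broadcasts one message with libcurl; the instance name, hub name, target method, and v1 endpoint/payload shape are assumptions based on the service's REST API documentation, and generating the JWT access token from the access key is not shown, so treat it as an outline rather than a drop-in client:

#include <curl/curl.h>
#include <string>

// Broadcast a message to every client on a hub through the SignalR Service REST API.
// `accessToken` must be a JWT signed with the service access key (generation not shown).
bool BroadcastMessage(const std::string& accessToken)
{
    const std::string url =
        "https://<your-instance>.service.signalr.net/api/v1/hubs/chat";
    const std::string body =
        R"({"target":"newMessage","arguments":["Hello from the REST API"]})";

    CURL* curl = curl_easy_init();
    if (!curl) return false;

    curl_slist* headers = nullptr;
    headers = curl_slist_append(headers, "Content-Type: application/json");
    headers = curl_slist_append(headers, ("Authorization: Bearer " + accessToken).c_str());

    curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());

    CURLcode result = curl_easy_perform(curl);

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    return result == CURLE_OK;
}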

What's missing here is that there is no equivalent of OnConnected() and OnDisconnected() in the serverless APIs, so there is no way for the Azure Function to know when a client connects or disconnects.

Now, with Event Grid, you can get these events through an Event Grid subscription (steps 4 and 5 in the diagram above):

  1. When a client connects to or disconnects from the SignalR service, the service publishes an event to Event Grid.

  2. In your function app you can add an Event Grid trigger and subscribe to these events; Event Grid then delivers them to the function through a webhook.

How to use it?

It's very simple to make your serverless application subscribe to SignalR connection events. Let's use Azure Functions as an example.

  1. First you need to make sure your SignalR Service instance is in serverless mode. (Create a SignalR Service instance if you haven’t done so.)

    Enable serverless mode

  2. Create an Event Grid trigger in your function app.

    Create Event Grid trigger

  3. In the Event Grid trigger, add an Event Grid subscription.

    Add Event Grid Subscription

    Then select your SignalR Service instance.

    Select SignalR Service instance

Now you're all set! Your function app is able to receive connection events from SignalR Service.

To test it, you just need to open a SignalR connection to the service. You can use the SignalR client in our sample repo, which contains a simple negotiate API implementation.

  1. Clone AzureSignalR-samples repo.

  2. Start the sample negotiation server.

    cd samples\Management\NegotiationServer
    set Azure__SignalR__ConnectionString=<connection_string>
    dotnet run
    
  3. Run SignalR client.

    cd samples\Management\SignalRClient
    dotnet run
    

    Open the function logs in the Azure portal and you'll see that a connected event was sent to the function:

    Azure Function output

    If you stop the client, you'll also see that a disconnected event is received.

Try it now!

This feature is now in public preview, so feel free to try it out and let us know your feedback by filing issues on GitHub.

For more information about how to use Event Grid with SignalR Service, you can read this article or try this sample.

The post Azure SignalR Service now supports Event Grid! appeared first on ASP.NET Blog.

Top Stories from the Microsoft DevOps Community – 2019.08.09


This week I had the privilege of participating in Microsoft training on "customer-driven engineering". The training focused on the process of formulating and iterating over hypotheses about which features are the most useful for our customers, based on their feedback. It was amazing to discover, after just a few conversations with developers outside the company, that all of our assumptions regarding requirements for a particular feature were incorrect!

Even after we choose which features to invest in, the Azure DevOps team continues to iterate over the implementation based on customer feedback. It is the quick deployment cycle and the short feedback loops that enable us to do this effectively. As the 10th anniversary of the Continuous Delivery book approaches, I reflect on how many professionals back then considered daily deployments unattainable. It is immensely rewarding to work on a tool that enables more organizations around the world to approach this ideal!

The blogs from this week cover CI/CD workflow improvements for engineers from different walks of life.

Get started integrating Dynatrace into your Azure DevOps release pipelines
This article by Rob Jahn demonstrates the integration between Azure Pipelines and Dynatrace. Rob shows an example of calling the Dynatrace API via a PowerShell script task in a Release Pipeline to create Information events for your CI/CD process, providing more context around your application lifecycle to your monitoring team! You can also find the script examples in Rob’s GitHub repo.

Showing VS Code Extension Test Outputs in Azure Pipelines
This short post from Aaron Powell covers generating test outputs for Azure Pipelines using Mocha and xUnit. With just a few lines of code, Mocha is set up to create a test report visually displayed by the build pipeline. Nice work, Aaron!

Getting Started with Azure Pipelines for Xamarin Developers
This post from Dan Siegel looks at using Azure Pipelines to deploy Xamarin applications. The post covers getting started with Azure DevOps and installing the marketplace extensions made for publishing mobile apps into Apple App Store and Google Play. The post then covers additional useful extensions, and configuring secrets for mobile application signing. Stay tuned for the future post from Dan on configuring the mobile app builds using YAML!

Understand Azure DevOps CLI
This post by Jaish Mathews explores the use of the relatively new Azure DevOps CLI extension which allows you to interact with Azure DevOps services using the command-line. The post covers what you need to install the extension and start exploring the available commands in az pipelines. After the installation, you can also explore the documentation for the CLI commands for repos, boards, and artifacts.

AzureDevOps: CICD for Azure Data Factory
This post by Jayendran Arumugam explores configuring CI/CD for Azure Data Factory using Azure DevOps. Although the experience is still a little rough around the edges, it allows you to automate the ADF deployments, bringing the DevOps mindset into the ETL realm. Thank you Jayendran for creating the walkthrough!

If you’ve written an article about Azure DevOps or find some great content about DevOps on Azure, please share it with the #AzureDevOps hashtag on Twitter!

The post Top Stories from the Microsoft DevOps Community – 2019.08.09 appeared first on Azure DevOps Blog.

The PICO-8 Virtual Fantasy Console is an idealized constrained modern day game maker


Animated GIF of PICO-8

I love everything about PICO-8. It's a fantasy gaming console that wants you - and the kids in your life and everyone you know - to make games!

How cool is that?

You know the game Celeste? It's available on every platform, has won every award, and is generally considered a modern-day classic. Well, the first version was made on PICO-8 in 4 days as a hackathon project, and you can play it here online. Here's the link from when they launched it on the forums 4 years ago. They pushed the limits, as they call out: "We used pretty much all our resources for this. 8186/8192 code, the entire spritemap, the entire map, and 63/64 sounds." How far could one go? Wolf3D even?

"A fantasy console is like a regular console, but without the inconvenience of actual hardware. PICO-8 has everything else that makes a console a console: machine specifications and display format, development tools, design culture, distribution platform, community and playership. It is similar to a retro game emulator, but for a machine that never existed. PICO-8's specifications and ecosystem are instead designed from scratch to produce something that has it's own identity and feels real. Instead of physical cartridges, programs made for PICO-8 are distributed on .png images that look like cartridges, complete with labels and a fixed 32k data capacity."

What a great start and great proof that you can make an amazing game in a small space. If you loved GameBoys and have fond memories of GBA and other small games, you'll love PICO-8.

How to play PICO-8 cartridges

Demon Castle

If you just want to explore, you can go to https://www.lexaloffle.com and just play in your browser! PICO-8 is a "fantasy console" that doesn't exist physically (unless you build one, more on that later). If you want to develop cartridges and play locally, you can buy the whole system (any platform) for $14.99, which I have.

If you have Windows and Chrome or the new Edge, you can just plug in your Xbox controller with a micro-USB cable and visit https://www.lexaloffle.com/pico-8.php and start playing now! It's amazing - yes, I know how it works, but it's still amazing - to me to be able to play a game in a web browser using a game controller. I guess I'm easily impressed.

It wasn't very clear to me how to load and play any cartridge LOCALLY. For example, I can play Demon Castle here on the Forums but how do I play it locally and later, offline?

The easy way is to run PICO-8 and hit ESC to get their command line. Then I type LOAD #cartid where #cartid is literally the id of the cartridge on the forums. In the case of Demon Castle it's #demon_castle-0 so I can just LOAD #demon_castle-0 followed by RUN.

Alternatively - and this is just lovely - if I see the PNG pic of the cartridge on a web page, I can just save that PNG locally into C:\Users\scott\AppData\Roaming\pico-8\carts and then run it with LOAD demon_castle-0 (or I can include the full filename with extensions). THAT PNG ABOVE IS THE ACTUAL GAME AS WELL. What a clever thing - a true virtual cartridge.

One of the many genius parts of the PICO-8 is that the "cartridges" are actually PNG pictures of cartridges. Drink that in for a second. They save a screenshot of the game while the cart is running, then they hide the actual code in a steganographic process - they are hiding the code in two of the bits of each color channel! Since the cart pics are 160x205, there's enough room for 32k: at two bits from each of the four (RGBA) channels, that works out to one byte per pixel, and 160 x 205 = 32,800 bytes, just over the 32k cartridge size.

A p8 file is source code and a p8.png is the compiled cart!

How to make PICO-8 games

The PICO-8 software includes everything you need - consciously constrained - to make AND play games. You hit ESC to move between the game and the game designer. It includes a sprite and music editor as well.

From their site, the specifications are TIGHT on purpose because constraints are fun. When I wrote for the PalmPilot back in the 90s, I had just 4k of heap, and it was the most fun I've had in years.

  • Display - 128x128 16 colours
  • Cartridge Size - 32k
  • Sound - 4 channel chip blerps
  • Code - Lua
  • Sprites - 256 8x8 sprites
  • Map - 128x32 cels

"The harsh limitations of PICO-8 are carefully chosen to be fun to work with, to encourage small but expressive designs, and to give cartridges made with PICO-8 their own particular look and feel."

The code you will use is Lua. Here's some demo code for a Hello World that animates 11 sprites and includes two lines of text:

t = 0

music(0) -- play music from pattern 0

function _draw()
  cls()
  for i=1,11 do -- for each letter
    for j=0,7 do -- for each rainbow trail part
      t1 = t + i*4 - j*2 -- adjusted time
      y = 45-j + cos(t1/50)*5 -- vertical position
      pal(7, 14-j) -- remap colour from white
      spr(16+i, 8+i*8, y) -- draw letter sprite
    end
  end

  print("this is pico-8", 37, 70, 14)
  print("nice to meet you", 34, 80, 12)
  spr(1, 64-4, 90) -- draw heart sprite
  t += 1
end

That's just a simple example, there's a huge forum with thousands of games and lots of folks happy to help you in this new world of game creation with the PICO-8. Here's a wonderful PICO-8 Cheat Sheet to print out with a list of functions and concepts. Maybe set it as your wallpaper while developing? There's a detailed User Manual and a 72 page PICO-8 Zine PDF which is really impressive!

And finally, be sure to bookmark this GitHub hosted amazing curated list of PICO-8 resources! https://github.com/pico-8/awesome-PICO-8


Writing PICO-8 Code in another Editor

There is a 3 year old PICO-8 extension for Visual Studio Code that is a decent start, although it's created assuming a Mac, so if you are a Windows user, you will need to change the Keyboard Shortcuts to something like "Ctrl-Shift-Alt-R" to run cartridges. There's no debugger that I'm seeing. In an ideal world we'd use launch.json and have a registered PICO-8 type and that would make launching after changing code a lot clearer.

There is a more recent "pico8vscodeditor" extension by Steve Robbins that includes snippets for loops and some snippets for the Pico-8 API. I recommend this newer fleshed out extension - kudos Steve! Be sure to include the full path to your PICO-8 executable, and note that the hotkey to run is a chord, starting with "Ctrl-8" then "R."

Telling VS-Code about PICO-8

Editing code directly in the PICO-8 application is totally possible and you can truly develop an entire cart in there, but if you do, you're a better person than I. Here's a directory listing in VSCode on the left and PICO-8 on the right.

Directories in PICO-8

And some code.

Editing Pico-8 code

You can export to HTML5 as well as binaries for Windows, Mac, and Linux. It's a full game maker! There are also other game systems out there like PicoLove that take PICO-8 in different directions, and those are worth knowing about as well.

What about a physical PICO-8 Console

A number of folks have talked about the ultimate portable handheld PICO-8 device. I have done a lot of spelunking and as of this writing it doesn't exist.

  • You could get a Raspberry Pi Zero and put this Waveshare LCD hat on top. The screen is perfect. But the joystick and buttons...just aren't. There's also no sound by default. But $14 is a good start.
  • The Tiny GamePi15, also from Waveshare could be good with decent buttons but it has a 240x240 screen.
  • The full sized Game Hat looks promising and has a large 480x320 screen so you could play PICO-8 at a scaled 256x256.
  • The RetroStone is also close, but you're truly on your own, compiling drivers yourself (twitter thread) from what I can gather.
  • The ClockworkPI GameShell is SOOOO close, but the screen is 320x240, which makes 128x128 an awkward scaled mess with aliasing, and the screen the Clockwork folks chose doesn't have a true grid of pixels. Their pixels are staggered. Hopefully they'll offer an alternative module one day; then this would truly be the perfect device. There are clear instructions on how to get going.
  • The PocketCHIP has a great screen but a nightmare input keyboard.

For now, any PC, laptop, or Raspberry Pi with a proper setup will do just fine for you to explore the PICO-8 and the world of fantasy consoles!


Sponsor: OzCode is a magical debugging extension for C#/.NET devs working in Visual Studio. Get to the root cause of your bugs faster with heads-up display, advanced search inside objects, LINQ query debugging, side-by-side object comparisons & more. Try for free!


© 2019 Scott Hanselman. All rights reserved.
     

Rapidly develop blockchain solutions, but avoid the complexities


After first emerging as the basis for the Bitcoin protocol, blockchain has since gained momentum as a way to digitize business processes that extend beyond the boundaries of a single organization. While digital currencies use the shared ledger to track transactions and balances, enterprises are coming together to use the ledger in a different way. Smart contracts—codified versions of paper based agreements—enable multiple organizations to agree on terms that must be met for a transaction to be considered valid, empowering automated verification and workflows on the blockchain.

These digitized business processes, governed by smart contracts and powered by the immutability of blockchain, are poised to deliver the scalable trust today’s enterprises need. One Microsoft partner, SIMBA Chain, has created an offering that reduces the effort and time to start creating solutions using blockchain technology.

The Azure platform offers a wealth of services for partners to enhance, extend, and build industry solutions. Here we describe how one Microsoft partner uses Azure to solve a unique problem.

Simplifying blockchain app development

SIMBA stands for SIMpler Blockchain Applications. SIMBA Chain is a cloud-based Smart Contract as a Service (SCaaS) platform that enables users with a variety of skill sets to build decentralized applications (dApps) and deploy them to either iOS or Android.

The figure below shows the platform and the components (such as the Django web framework) used to communicate with a dApp using a pub/sub model. SIMBA Chain auto-generates the smart contract and API keys for deployment, and the app can be deployed to a number of backends for mobile apps (such as Android and iOS). Communication to participate in the blockchain occurs through an API generated from a smart contract.

A graphic showing how SIMBA is used in blockchain app development.

With this platform, anyone with a powerful idea can build a decentralized application. SIMBA Chain supports Ethereum and will add more blockchain protocols to their platform.

A time-saving technology

SIMBA Chain’s user-friendly interface greatly reduces the time and custom code generation required to build and deploy a blockchain-based application. Users can create and model a business application, define the assets along with the smart contracts parameters, and in a few simple clicks the SIMBA platform generates an API which interfaces with the ledger. By reducing application development time, SIMBA enables faster prototyping, refinement, and deployment.

Recommended next steps

Go to the Azure Marketplace listing for SIMBA Chain and click Get It Now.

Learn more about Azure Blockchain Service.
