Rerun activities inside your Azure Data Factory pipelines

Data integration is complex, with many moving parts. It helps organizations combine data and complex business processes in hybrid data environments. Failures are common in data integration workflows, whether because data does not arrive on time, because of functional code issues in your pipelines, or because of infrastructure problems. A common requirement is the ability to rerun failed activities inside your data integration workflows. Sometimes you also want to rerun activities to reprocess data after an error upstream in data processing. Azure Data Factory now allows you to rerun activities inside your pipelines. You can rerun the entire pipeline, or choose to rerun downstream from a particular activity inside your data factory pipelines.

Simply navigate to the ‘Monitor’ section in the data factory user experience, select your pipeline run, click ‘View activity runs’ under the ‘Actions’ column, select the activity, and click ‘Rerun from activity <activityname>’.

Monitor pipeline

UX Demo

You can also view the rerun history for all your pipeline runs inside the data factory. Simply click on the toggle to ‘View All Rerun History’.

View all rerun history

You can also view rerun history for a particular pipeline run by clicking ‘View Rerun History’ under the ‘Actions’ column. This allows you to see the different run attempts that you have made for your pipeline execution.

Actions column

Learn more about rerunning activities inside your data factory pipelines.

Our goal is to continue adding features to improve the usability of Data Factory tools. Get started building pipelines easily and quickly using Azure Data Factory. If you have any feature requests or want to provide feedback, please visit the Azure Data Factory forum.


Collecting .NET Core Linux Container CPU Traces from a Sidecar Container


Introduction

In recent years, containerization has gained popularity in DevOps due to its valuable capabilities, including more efficient resource utilization and better agility. Microsoft and Docker have been working together to create a great experience for running .NET applications inside containers. See the following blog posts for more information:

When there’s a performance problem, analyzing it often requires detailed information about what was happening at the time. Perfcollect is the recommended tool for gathering .NET Core performance data on Linux. The .NET team introduced the EventPipe feature in .NET Core 2.0 and has been continuously improving its usability for end users. The goal of EventPipe is to make it very easy to profile .NET Core applications.

However, currently EventPipe has limitations:

  • Only the managed parts of call stacks are collected. If a performance issue is in native code or in the .NET Core runtime, EventPipe can only trace it to the boundary.
  • It does not work for earlier versions of .NET Core.

In these cases, perfcollect is still the preferred tool. Containers, however, bring challenges to using perfcollect. There are several ways to use perfcollect to gather performance traces for a .NET Core application running in a Linux container, and each has its drawbacks:

  • Collecting from the host
    • Process IDs and the file system on the host don’t match those in the containers.
    • perfcollect cannot find the container’s files under host paths (what is /usr/share/dotnet/?).
    • Some container operating systems (for example, CoreOS) don’t support installing common packages/tools, such as linux-tools and lttng, which are required by the perfcollect tool.
  • Collecting from the container running the application.
    • Installing profiling tools bloats the container and increases the attack surface.
    • Profiling affects the application’s performance in the same container (for example, its resource consumption is counted against the quota).
    • The perf tool needs elevated capabilities to run from a container, which defeats the security features of containers.
  • Collecting from another “sidecar” container running on the same host.
    • Possible environment mismatches between sidecar container and application container.

Tim Gross published a blog post on debugging Python containers in production. His approach is to run tools inside another (sidecar) container on the same host as the application container. The same idea can be applied to profiling and debugging .NET Core Linux containers. This approach has the following benefits:

  • Application containers don’t need elevated privileges.
  • Application container images remain mostly unchanged. They are not bloated by tool packages that are not required to run applications.
  • Profiling doesn’t consume application container resources, which are usually throttled by a quota.
  • The sidecar container can be built as close to the application container as possible, so that tools used by perfcollect, such as crossgen and objcopy, operate on files of the same versions at the same paths, even though they are in different containers.

This article only describes the manual/one-off performance investigation scenario. However, with additional effort the approach could work in an automated way and/or under an orchestrator. See this tutorial for an example on profiling with a sidecar container in a Kubernetes environment.

Note: tracing .NET Core events using LTTng is not supported in this sidecar approach because of how LTTng works (it uses shared memory and per-CPU buffers), so this approach cannot be used to collect events from the .NET Core runtime.

The rest of this doc gives a step-by-step guide to using a sidecar container to collect a CPU trace of an ASP.NET application running in a Linux container.

Building Container Images

  1. We use a single Dockerfile and the multi-stage builds feature introduced in Docker 17.05 to build the application and sidecar container images.
    The sample project used is the output of dotnet new webapi. A dotnet restore step is used to download a matching version of crossgen from nuget.org. This step is just a convenient way to download a matching crossgen, but it does add time to the docker build process. If this becomes a concern, there are other ways to add crossgen, for example, copying a pre-downloaded version from a cached location. However, we must ensure that the cached crossgen comes from the same version of the .NET Core runtime, because crossgen doesn’t always work properly across versions. In the future, the .NET team might make improvements in this area to make the experience better, for example, shipping a stable crossgen tool that works across different versions.

    In the example, the most important packages are:
    • linux-tools,
    • lttng-tools,
    • liblttng-ust-dev,
    • zip,
    • curl,
    • binutils (for objcopy/objdump commands),
    • procps (for ps command).

    The perfcollect script is downloaded and saved to the /tools directory. Other tools (gdb/vim/emacs-nox/etc.) can be installed as needed for diagnostic and debugging purposes.

  2. Build the application image with the following command:
    user@host ~/project/webapi $ docker build . --target application -f Dockerfile -t application

  3. Build the sidecar image with the following command:
    user@host ~/project/webapi $ docker build . --target sidecar -f Dockerfile -t sidecar

Running Docker Containers

  1. Use a shared docker volume for /tmp. The Linux perf tool needs to access the perf*.map files that are generated by the .NET Core application. By default, containers are isolated, so the *.map files generated inside the application container are not visible to the perf tool running inside the sidecar container. We need to make these *.map files available to the perf tool running inside the sidecar.
    In this example, a shared docker volume is mapped to the /tmp directory of both the application container and the sidecar container. Since both /tmp directories are backed by the same volume, the sidecar container can access files written into the /tmp directory by the application container.
    Run the application container with a name (application in this example), since it’s easier to refer to the container using its name. Map the /tmp folder to a volume named shared-tmp. Docker will create the volume if it does not exist yet.
    user@host ~/project/webapi $ docker run -p 80:80 -v shared-tmp:/tmp --name application application

    Volume mount might not be desirable in some cases. Another option is to run the application container without the -v options and then use docker cp commands to copy the /tmp/perf*.map files from the running application container to the running sidecar container’s /tmp folder before starting the perfcollect tool.

  2. Run the sidecar using the pid and net namespaces of the application container, and with /tmp mapped to the same shared volume. Give this container a name (sidecar in this example).
    Linux namespaces isolate containers and make the resources they use invisible to other containers by default; however, we can make docker containers share namespaces using options like --pid and --net. See the Linux namespaces wiki page for more background.
    The following command lets the sidecar container share the same pid and net namespaces with the application container, so that processes in the application container can be debugged or profiled from the sidecar container. The --cap-add ALL --privileged switches grant the sidecar container permissions to collect performance traces.
    user@host ~/project/webapi $ docker run -it --pid=container:application --net=container:application -v shared-tmp:/tmp --cap-add ALL --privileged --name sidecar sidecar

Collecting CPU Performance Traces

  1. Inside the sidecar container, collect CPU traces for the dotnet process (or your .NET Core application process if it is published as self-contained). This process usually has a PID of 1, but that may vary depending on what else runs in the application container before the application starts.
    root@7eb78f190ed7:/tools# ps -aux

    Output should be similar to the following:

    USER        PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
    root          1  1.1  0.5 7511164 82576 pts/0   SLsl+ 18:25   0:03 dotnet webapi.dll
    root        104  0.0  0.0  18304  3332 pts/0    Ss   18:28   0:00 bash
    root        198  0.0  0.0  34424  2796 pts/0    R+   18:31   0:00 ps -aux

    In this example, the dotnet process has a PID of 1, so when running the perfcollect script, pass 1 to the -pid option.

    root@7eb78f190ed7:/tools# ./perfcollect collect sample -nolttng -pid 1

    By using the -pid 1 option perfcollect only captures performance data for the dotnet process. Remove the -pid option to collect performance data for all processes.

    Now generate some requests to the webapi service so that it consumes some CPU. This can be done manually using curl, or with a load-testing tool such as ApacheBench (ab) from another machine.

    user@another-host ~/test $ ab -n 200 -c 10 http://10.1.0.4/api/values

    Press Ctrl+C to stop collecting after the service has processed some requests.

  2. After collection is stopped, view the report using the following command:
    root@7eb78f190ed7:/tools# ./perfcollect view sample.trace.zip

  3. Verify that the trace includes the map files by listing contents in the zip file.
    root@7eb78f190ed7:/tools# unzip -l sample.trace.zip

    You should see perf-1.map and perfinfo-1.map in the zip, along with other *.map files.

    If anything went wrong during the collection, check out the perfcollect.log file inside the zip for more details.

    root@7eb78f190ed7:/tools# unzip sample.trace.zip sample.trace/perfcollect.log
    root@7eb78f190ed7:/tools# tail -100 sample.trace/perfcollect.log

    Messages like the following near the end of the log file indicate that you hit a known issue. Please check out the Potential Issues section for a workaround.

    Running /usr/bin/perf_4.9 script -i perf.data.merged -F comm,pid,tid,cpu,time,period,event,ip,sym,dso,trace > perf.data.txt
    'trace' not valid for hardware events. Ignoring.
    'trace' not valid for software events. Ignoring.
    'trace' not valid for unknown events. Ignoring.
    'trace' not valid for unknown events. Ignoring.
    Samples for 'cpu-clock' event do not have CPU attribute set. Cannot print 'cpu' field.
    
    Running /usr/bin/perf_4.9 script -i perf.data.merged -f comm,pid,tid,cpu,time,event,ip,sym,dso,trace > perf.data.txt
      Error: Couldn't find script `comm,pid,tid,cpu,time,event,ip,sym,dso,trace'
    
     See perf script -l for available scripts.

  4. On the host, retrieve the trace from the running sidecar container.
    user@host ~/project/webapi $ docker cp sidecar:/tools/sample.trace.zip ./

  5. Transfer the trace from the host machine to a Windows machine for further investigation using PerfView. PerfView supports analyzing perfcollect traces from Linux. Open sample.trace.zip and then follow the usual workflow of working with PerfView.
    Screenshot: collected CPU traces from Linux being analyzed in PerfView
    For more information on analyzing CPU traces from Linux using PerfView, see this blog post and Channel 9 series by Vance Morrison.

Potential Issues

  • In some configurations, the collected cpu-clock events don’t have the cpu field. This causes a failure at the ./perfcollect collect step when the script tries to merge trace data. Here is a workaround: open perfcollect in an editor, find the line that contains “-F” (capital F), then remove “cpu” from the $perfcmd lines:
    LogAppend "Running $perfcmd script -i $mergedFile -F comm,pid,tid,time,period,event,ip,sym,dso,trace > $outputDumpFile"
    $perfcmd script -i $mergedFile -F comm,pid,tid,time,period,event,ip,sym,dso,trace > $outputDumpFile 2>>$logFile
    LogAppend

    After applying the workaround and collecting the traces, be aware of a known PerfView issue when viewing traces whose cpu field is missing. This issue has already been fixed, and the fix will be available in future releases of PerfView.

  • If there are problems resolving .NET symbols, you can also use two additional settings. Note that this affects the application start-up performance.
    COMPlus_ZapDisable=1
    COMPlus_ReadyToRun=0

    Setting COMPlus_ZapDisable=1 tells the .NET Core runtime not to use the precompiled framework code. All the code will be just-in-time compiled, so crossgen is no longer needed, which means the steps to run dotnet restore -r linux-x64 and copy crossgen in the Dockerfile can be removed. For more details, check out the relevant section of Performance Tracing on Linux.

Conclusion

This document describes a sidecar approach to collecting a CPU performance trace for a .NET Core application running inside a container. The step-by-step guide here describes a manual/on-demand investigation; however, most of the steps above can be automated by a container orchestrator or infrastructure.

References and Useful Links

  1. Linux Container Performance Analysis, talk by Brendan Gregg, inventor of FlameGraph
  2. Examples and hands-on labs for Linux tracing tools workshops by Sasha Goldshtein goldshtn/linux-tracing-workshop.
  3. Debugging and Profiling .NET Core Apps on Linux, slides from Sasha Goldshtein assets.ctfassets.net/9n3x4rtjlya6/1qV39g0tAEC2OSgok0QsQ6/fbfface3edac8da65fd380cc05a1a028/Sasha-Goldshtein_Debugging-and-profiling-NET-Core-apps-on-Linux.pdf
  4. Debugging Python Containers in Production http://blog.0x74696d.com/posts/debugging-python-containers-in-production
  5. perfcollect source code dotnet/corefx-tools:src/performance/perfcollect/perfcollect@master
  6. Documentation on Performance Tracing on Linux for .NET Core. dotnet/coreclr:Documentation/project-docs/linux-performance-tracing.md@master
  7. PerfView tutorials on Channel9 channel9.msdn.com/Series/PerfView-Tutorial

The post Collecting .NET Core Linux Container CPU Traces from a Sidecar Container appeared first on .NET Blog.

Floating-Point Parsing and Formatting improvements in .NET Core 3.0


Starting with the .NET Core 2.1 release, we have been making iterative improvements to the floating-point parsing and formatting code in .NET Core. Now, in .NET Core 3.0 Preview 3, we are nearing completion of this work and would like to share more details about these changes and some of the differences you might see in your applications.

The primary goals of this work were to ensure correctness and standards compliance with IEEE 754-2008. For those unfamiliar with the standard, it defines the underlying format, base operations, and behaviors for binary floating-point types such as System.Single (float) and System.Double (double). The majority of modern processors and programming languages support some version of this standard, so it is important to ensure it is implemented correctly. The standard does not impact the integer types, such as System.Int32 (int), nor does it impact the other floating-point types, such as System.Decimal (decimal).
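To make the distinction concrete, here is a small, hedged C# illustration of standard IEEE 754 binary behavior (not specific to the .NET Core 3.0 changes):

using System;

class Ieee754Demo
{
    static void Main()
    {
        // 0.1 and 0.2 have no exact binary (base-2) representation, so their sum
        // stored in an IEEE 754 double is not exactly 0.3.
        double d = 0.1 + 0.2;
        Console.WriteLine(d == 0.3);          // False
        Console.WriteLine(d.ToString("G17")); // 0.30000000000000004

        // System.Decimal is a base-10 type and is not affected by these changes.
        decimal m = 0.1m + 0.2m;
        Console.WriteLine(m == 0.3m);         // True
    }
}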

Initial Work

We started with parsing changes as part of the .NET Core 2.1 release. Initially, this was just an attempt to fix a perf difference between Windows and Unix and was done by @mazong1123 in dotnet/coreclr#12894, which implements the Dragon4 algorithm. @mazong1123 also made a follow-up PR that improved perf even more by implementing the Grisu3 algorithm in dotnet/coreclr#14646. However, in reviewing the code we determined that the existing infrastructure had a number of issues that prevented us from always doing the right thing, and that it would require significantly more work to make correct.

Porting to C#

The first step in fixing these underlying infrastructure issues was porting the code from native to managed. We did this work in dotnet/coreclr#19999 and dotnet/coreclr#20080. As a result, the code is more portable, can be shared with other frameworks/runtimes (such as Mono and CoreRT), can easily be debugged with the .NET Debugger, and is available through SourceLink.

Making the parser IEEE compliant

We did some additional cleanup in dotnet/coreclr#20619 by removing various bits of duplicated code that were shared between the different parsers. Finally, we made the double and float parsing logic mostly IEEE compliant in dotnet/coreclr#20707, and this was made available in the first .NET Core 3.0 Preview.

These changes fixed three primary issues:

The fixes ensured that double.Parse/float.Parse would return the same result as the C#, VB, or F# compiler for the corresponding literal value. Producing the same result as the language compilers is important for determinism of runtime and compile time expressions. Up until the change, this was not the case.

To elaborate, every floating-point value requires between 1 and X significant digits (i.e. all digits that are not leading or trailing zeros) in order to roundtrip the value (that is, in order for double.Parse(value.ToString()) to return exactly value). This is at most 17 digits for double and at most 9 digits for float. However, this only applies to strings that are first formatted from an existing floating-point value. When parsing from an arbitrary string, you may have to instead consider up to Y digits to ensure that you produce the “nearest” representable value. This is 768 digits for double and 113 digits for float. We have tests validating such strings parse correctly in RealParserTestsBase.netcoreapp.cs and dotnet/corefx#35701. More details on this can be found on Rick Regan’s Exploring Binary blog.

An example of such a string would be for double.Epsilon (which is the smallest value that is greater than zero). The shortest roundtrippable string for this value is only 5e-324, but the exact string (i.e. the string that contains all significant digits available in the underlying value) for this value is exactly 1074 digits long, which is comprised of 323 leading zeros and 751 significant digits. You then need one additional digit to ensure that the string is rounded in the correct direction (should it be exactly double.Epsilon or the smallest value that is greater than double.Epsilon).
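As a minimal sketch of the roundtrip behavior described above (assuming the .NET Core 3.0 parsing and formatting changes; double.Epsilon is used only as an extreme example):

using System;

class RoundTripDemo
{
    static void Main()
    {
        // The default ToString() now produces the shortest roundtrippable string,
        // far shorter than the 1074-digit exact string mentioned above.
        double value = double.Epsilon;
        string shortest = value.ToString();                 // "5E-324"

        // Parsing the shortest roundtrippable string recovers exactly the same value.
        Console.WriteLine(double.Parse(shortest) == value); // True

        // An arbitrary decimal string is parsed to the nearest representable double.
        Console.WriteLine(double.Parse("0.1000000000000000055511151231257827") == 0.1); // True
    }
}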

Some additional minor cleanup was done in dotnet/coreclr#21036 to ensure that the remaining compliance issues were resolved. These were mostly about ensuring that we handle Infinity and NaN case-insensitively and that we allow an optional preceding sign.

Making the formatter IEEE 754-2008 compliant

The formatting code required more significant changes and was primarily done in dotnet/coreclr#22040, with follow-up work fixing some remaining issues in dotnet/coreclr#22522.

These changes fixed 5 primary issues:

These changes are expected to have the largest potential impact to existing code.

The summary of these changes for double/float is as follows (a short sketch illustrating the new behavior appears after the list):

  • ToString(), ToString("G"), and ToString("R") will now return the shortest roundtrippable string. This ensures that users end up with something that just works by default. An example of where it was problematic was Math.PI.ToString() where the string that was previously being returned (for ToString() and ToString("G")) was 3.14159265358979; instead, it should have returned 3.1415926535897931. The previous result, when parsed, returned a value which was internally off by 7 ULP (units in last place) from the actual value of Math.PI. This meant that it was very easy for users to get into a scenario where they would accidentally lose some precision on a floating-point value when they needed to serialize/deserialize it.
  • For the "G" format specifier that takes a precision (e.g. G3), the precision specifier is now always respected. For double with precisions less than 15 (inclusive) and for float with precisions less than 6 (inclusive) this means you get the same string as before. For precisions greater than that, you will get up to that many significant digits, provided those digits are available (i.e. (1.0).ToString("G17") will still return 1 since the exact string only has one significant digit; but Math.PI.ToString("G20") will now return 3.141592653589793116, since the exact string contains at least 20 significant digits).
  • For the "C", "E", "F", "N", and "P" format specifiers the changes are similar. The difference is that these format specifiers treat the precision as the number of digits after the decimal point, in contrast to "G" which treats it as the number of significant digits. The previous implementation had a bug where, for strings that contained more than 15 significant digits, it would actually fill in the remaining digits with zero, regardless of whether they appeared before or after the decimal point. As an example, (1844674407370955.25).ToString("F4")would previously return 1844674407370960.0000. The exact string, however, actually contains enough information to fill all the integral digits. With the changes made we instead fill out the available integral digits while still respecting the request for the 4 digits after the decimal point and instead return 1844674407370955.2500.
  • For custom format strings, they have the same behavior as before and will only print up to 15 significant digits, regardless of how many are requested. Fixing this to support an arbitrary number of digits would require more work to support and hasn’t been done at this time.
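The following hedged sketch echoes the examples above; the output comments reflect the values quoted in this post:

using System;

class FormattingChangesDemo
{
    static void Main()
    {
        // ToString(), ToString("G"), and ToString("R") now return the shortest roundtrippable string.
        Console.WriteLine(Math.PI.ToString());

        // "G" with a precision above 15 now honors the requested significant digits.
        Console.WriteLine(Math.PI.ToString("G20"));              // 3.141592653589793116

        // (1.0) has only one significant digit, so "G17" still prints 1.
        Console.WriteLine((1.0).ToString("G17"));                // 1

        // "F4" fills the available integral digits plus 4 fractional digits.
        Console.WriteLine((1844674407370955.25).ToString("F4")); // 1844674407370955.2500
    }
}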

Potential impact to existing code

When picking up .NET Core 3.0, it is expected that you may encounter some of the differences described in this post in your application or library code. The general recommendation is that the code be updated to handle these changes. However, this may not be possible in all cases and a workaround may be required. Focused testing for floating-point specific code is recommended.

For differences in parsing, there is no mechanism to fall back to the old behavior. There were already differences across various operating systems (Linux, Windows, macOS, etc.) and architectures (x86, x64, ARM, ARM64, etc.). The new logic makes all of these consistent and ensures that the result returned is consistent with the corresponding language literal.

For differences in formatting, you can get the equivalent behavior as follows (a short sketch appears after the list):

  • For ToString() and ToString("G") you can use G15 as the format specifier as this is what the previous logic would do internally.
  • For ToString("R"), there is no mechanism to fall back to the old behavior. The previous behavior would first try “G15” and then, using the internal buffer, check whether it roundtrips; if that failed, it would instead return “G17”.
  • For the "G" format-specifier that takes a precision, you can force precisions greater than 15 (exclusive) to be exactly 17. For example, if your code is doing ToString("G20") you can instead change this to ToString("G17").
  • For the remaining format-specifiers that take a precision ("C", "E", "F", "N", and "P"), there is no mechanism to fall back to the old behavior. The previous behavior would clamp precisions greater than 14 (exclusive) to be 17 for "E" and 15 for the others. However, this only impacted the significant digits that would be displayed; the remaining digits (even if available) would be filled in as zero.
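As a minimal sketch of these workarounds (the format strings are the ones described above; they approximate, but do not exactly reproduce, the old behavior in every case):

using System;

class LegacyFormattingWorkarounds
{
    static void Main()
    {
        double value = Math.PI;

        // Approximate the old ToString()/ToString("G") output: 15 significant digits.
        Console.WriteLine(value.ToString("G15"));

        // Approximate the old behavior of a "G" precision above 15 by forcing it to 17,
        // e.g. use "G17" where the code previously asked for "G20".
        Console.WriteLine(value.ToString("G17"));
    }
}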

The post Floating-Point Parsing and Formatting improvements in .NET Core 3.0 appeared first on .NET Blog.

Microsoft opens first datacenters in Africa with general availability of Microsoft Azure


Map of Azure regions highlighting the new South Africa regions

Today, I am pleased to announce the general availability of Microsoft Azure from our new cloud regions in Cape Town and Johannesburg, South Africa. Nedbank, Peace Parks Foundation, and eThekwini water are just a few of the organizations in Africa leveraging Microsoft cloud services today and will benefit from the increased computing resources and connectivity from our new cloud regions.

The launch of these regions marks a major milestone for Microsoft as we open our first enterprise-grade datacenters in Africa, becoming the first global provider to deliver cloud services from datacenters on the continent. The new regions provide the latest example of our ongoing investment to help enable digital transformation and advance technologies such as AI, cloud, and edge computing across Africa.

By delivering the comprehensive Microsoft Cloud — comprising Azure, Office 365, and Dynamics 365 — from datacenters in a given geography, we offer scalable, available, and resilient cloud services to companies and organizations while meeting data residency, security, and compliance needs. We have deep expertise in protecting data and empowering customers around the globe to meet extensive security and privacy requirements, including offering the broadest set of compliance certifications and attestations in the industry.

With 54 regions announced worldwide, more than any other cloud provider, Microsoft’s global cloud infrastructure will connect the new regions in South Africa with greater business opportunity, help accelerate new global investment, and improve access to cloud and Internet services across Africa.

Accelerating digital transformation in Africa

As we execute our expansion strategy, we consider the demand for locally delivered cloud services and the opportunity for digital transformation in the market. According to a study from IDC, spending on public cloud services in South Africa will nearly triple over the next five years, and the adoption of cloud services will generate nearly 112,000 net-new jobs in South Africa by the end of 2022. The increased utilization of public cloud services and the additional investments into private and hybrid cloud solutions will enable organizations in South Africa to focus on innovation and building digital businesses at scale.

Nedbank, a leading African bank that services a diverse client base in South Africa and the rest of Africa, is pursuing a transformation strategy with the Azure cloud platform to enable its digital aspirations. Microsoft has had a long relationship with Nedbank which has culminated in enabling its migration to the cloud to help increase its competitiveness, agility, and customer focus. Azure also provides compliance technologies that assist Nedbank to increase data privacy and security which are primary concerns of its customers, regulators, and investors. Nedbank has adopted a hybrid and multi-vendor cloud strategy in which Microsoft is an integral partner.

The Peace Parks Foundation, in collaboration with Cloudlogic, uses Azure to rapidly deploy infrastructure and solutions in far-flung protected spaces as well as to compute a considerable volume of data around at-risk species and wildlife in multiple conservation areas spanning thousands of kilometers. In efforts to sustain the delicate ecosystem and keystone species, such as the black and white rhinoceros, Peace Parks Foundation processes up to tens of thousands of images captured monthly on wildlife cameras in remote areas to monitor possible poaching activity. In the future, Peace Parks will leverage the new cloud infrastructure for radio over Internet protocol, a high-tech solution to a low-tech problem, to improve radio communication over remote and isolated areas.

eThekwini water is a unit of the eThekwini Municipality in Durban, South Africa responsible for the provision of water and sanitation services critical for sustaining life for 3.5 million residents in a 2,000+ square kilometer service area. In partnership with Cloudlogic, eThekwini water is using Azure for critical application monitoring as well as site failover and disaster recovery initiatives. It’ll benefit from locally delivered cloud services to improve performance of real-time reporting and monitoring of water infrastructure 24 hours a day, seven days a week.

Empowering people and organizations across Africa

Microsoft has long been working to support organizations, local start-ups, and NGOs in Africa that have the potential to solve some of the biggest problems facing humanity, such as the scarcity of water and food as well as economic and environmental sustainability.

In 2013, we launched Microsoft 4Afrika investing in start-ups, partners, small-to-medium enterprises, governments, and youth on the African continent. The program is focused on delivering affordable access to the Internet, developing skilled workforces, and investing in local technology solutions. Africa has the potential to help lead the technology revolution; therefore, Microsoft is empowering organizations and people to drive economic development, inclusive growth, and digital transformation. 4Afrika is Microsoft’s business and market development engine on the continent, which is preparing the market to embrace cloud technology.

We have also extended FarmBeats, an end-to-end approach to help farmers benefit from technology innovation at the edge, to Nairobi, Kenya. FarmBeats strives to enable data-driven farming as we believe that data, coupled with the farmer’s knowledge and intuition about his or her farm, can help increase farm productivity and reduce costs. The new effort in Nairobi will be focused on addressing the specific challenges of farming in Africa with the intent of expanding to other African countries.

Bringing the complete cloud to Africa

The new cloud regions in Africa are connected with Microsoft’s other regions via our global network, one of the largest and most innovative on the planet, which spans more than 100,000 miles (161,000 kilometers) of terrestrial fiber and subsea cable systems to deliver services to customers. We’ve expanded our network footprint to reach Egypt, Kenya, Nigeria, and South Africa and will be expanding to Angola. Microsoft is bringing the global cloud closer to home for African organizations and citizens through our trans-Arabian paths between India and Europe, as well as our trans-Atlantic systems including Marea, the highest-capacity cable to ever cross the Atlantic.

Azure is the first of Microsoft’s intelligent cloud services to be delivered from the new datacenters in South Africa. Office 365, Microsoft’s cloud-based productivity solution, is anticipated to be available by the third quarter of calendar year 2019, and Dynamics 365, the next generation of intelligent business applications, is anticipated for the fourth quarter.

Follow these links to learn more about the new cloud services in South Africa and the availability of Azure regions and services across the globe.

Real-time serverless applications with the SignalR Service bindings in Azure Functions


Since our public preview announcement at Microsoft Ignite 2018, every month thousands of developers worldwide have leveraged the Azure SignalR Service bindings for Azure Functions to add real-time capabilities to their serverless applications. Today, we are excited to announce the general availability of these bindings in all global regions where Azure SignalR Service is available!

SignalR Service is a fully managed Azure service that simplifies the process of adding real-time web functionality to applications over HTTP. This real-time functionality allows the service to push messages and content updates to connected clients using technologies such as WebSocket. As a result, clients are updated without the need to poll the server or submit new HTTP requests for updates.


Azure Functions provides a productive programming model based on triggers and bindings for accelerated development and serverless hosting of event-driven applications. It enables developers to build apps using the programming languages and tools of their choice, with an end-to-end developer experience that spans from building and debugging locally, to deploying and monitoring in the cloud. Combining Azure SignalR Service with Azure Functions using these bindings, you can easily push updates to the UI of your applications with just a few lines of code. The source of those updates can be data coming from different Azure services, or any service able to communicate over HTTP, thanks to the triggers supported in Azure Functions that will start the execution of a script responding to an event.

A common scenario worth mentioning is updating the UI of an application based on modifications made on the database. Using a combination of Cosmos DB change feed, Azure Functions, and SignalR Service, you can automate these UI updates in real-time with just a few lines of code for registering the client that will receive those updates and pushing the updates themselves. This fully managed experience is a great fit for event-driven scenarios and enables the creation of serverless backends and applications with real-time capabilities, reducing development time and operations overhead.
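For illustration, here is a hedged C# sketch of that Cosmos DB change feed scenario (the hub, database, collection, and connection-setting names are placeholders, and the SignalR Service and Cosmos DB binding extension packages for Azure Functions are assumed):

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.SignalRService;

public static class RealTimeFunctions
{
    // Clients call this endpoint to obtain connection details for the SignalR Service hub.
    [FunctionName("negotiate")]
    public static SignalRConnectionInfo Negotiate(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
        [SignalRConnectionInfo(HubName = "notifications")] SignalRConnectionInfo connectionInfo)
    {
        return connectionInfo;
    }

    // The Cosmos DB change feed triggers this function; each changed document
    // is pushed to connected clients through the SignalR Service output binding.
    [FunctionName("PushDocumentChanges")]
    public static async Task PushDocumentChanges(
        [CosmosDBTrigger("mydatabase", "items",
            ConnectionStringSetting = "CosmosDBConnection",
            LeaseCollectionName = "leases",
            CreateLeaseCollectionIfNotExists = true)] IReadOnlyList<Document> changes,
        [SignalR(HubName = "notifications")] IAsyncCollector<SignalRMessage> messages)
    {
        foreach (Document doc in changes)
        {
            await messages.AddAsync(new SignalRMessage
            {
                Target = "documentUpdated",            // client-side handler name
                Arguments = new object[] { doc }
            });
        }
    }
}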


Using the Azure SignalR Service bindings for Azure Functions you will be able to:

  • Use SignalR Service without dependency on any application server for a fully managed, serverless experience.
  • Build serverless real-time applications using all Azure Functions generally available languages: JavaScript, C#, and Java.
  • Leverage the SignalR Service bindings with all event triggers supported by Azure Functions to push messages to connected clients in real-time.
  • Use App Service Authentication with SignalR Service and Azure Functions for improved security and out-of-the-box, fully managed authentication.

Next steps

We'd like to hear your feedback and comments. You can reach the product team on the GitHub repo, or by email.

Azure Databricks – New capabilities at lower cost


Azure Databricks provides a fast, easy, and collaborative Apache Spark™-based analytics platform to accelerate and simplify the process of building big data and AI solutions backed by industry leading SLAs.

With Azure Databricks, customers can set up an optimized Apache Spark environment in minutes. Data scientists and data engineers can collaborate using an interactive workspace with languages and tools of their choice. Native integration with Azure Active Directory (Azure AD) and other Azure services enables customers to build end-to-end modern data warehouse, machine learning and real-time analytics solutions.

We have seen tremendous adoption of Azure Databricks and today we are excited to announce new capabilities that we are bringing to market.

General availability of Data Engineering Light

Customers can now get started with Azure Databricks with a new low-priced workload called Data Engineering Light that enables customers to run batch applications on managed Apache Spark. It is meant for simple, non-critical workloads that don’t need the performance, autoscaling, and other benefits provided by Data Engineering and Data Analytics workloads. Get started with this new workload.

Additionally, we have reduced the price for the Data Engineering workload across both the Standard and Premium SKUs. Both the SKUs are now available at up to 25 percent lower cost. To check out the new pricing for Azure Databricks SKUs, visit the pricing page.

Preview of managed MLflow

MLflow is an open source framework for managing the machine learning lifecycle. With managed MLflow, customers can access it natively from their Azure Databricks environment and leverage Azure Active Directory for authentication. With managed MLflow on Azure Databricks customers can:

  • Track experiments by automatically recording parameters, results, code, and data to an out-of-the-box hosted MLflow tracking server. Runs can now be organized into experiments from within Azure Databricks, and results can be queried from within Azure Databricks notebooks to identify the best-performing models.
  • Package machine learning code and dependencies locally in a reproducible project format and execute remotely on a Databricks cluster.
  • Quickly deploy models to production.

Learn more about managed MLflow.

Machine learning on Azure with Azure Machine Learning and Azure Databricks

Since the general availability of Azure Machine Learning service (AML) in December 2018, and its integration with Azure Databricks, we have received overwhelmingly positive feedback from customers who are using this combination to accelerate machine learning on big data. Azure Machine Learning complements the Azure Databricks experience by:

  • Unlocking advanced automated machine learning capability which enables data scientists of all skill levels to identify suitable algorithms and hyperparameters faster.
  • Enabling DevOps for machine learning for simplified management, monitoring, and updating of machine learning models.
  • Deploying models from the cloud and the edge.
  • Providing a central registry for experiments, machine learning pipelines, and models that are being created across the organization.

The combination of Azure Databricks and Azure Machine Learning makes Azure the best cloud for machine learning. Customers benefit from an optimized, autoscaling Apache Spark based environment, an interactive, collaborative workspace, automated machine learning, and end-to-end machine learning lifecycle management.

Machine Learning on Azure workflow

Get started today!

Try Azure Databricks and let us know your feedback!


Updated Microsoft Store App Developer Agreement: New Revenue Share


As of March 5th, the Microsoft Store team has updated the Microsoft Store App Developer Agreement (ADA). The next time you log into your Partner Center dashboard, you or someone with an administrative role on your account will be prompted to reaccept the ADA before you can continue to update or manage your applications.

The updated ADA includes the new Microsoft Store fee structure that delivers up to 95 percent of the revenue back to consumer app developers. To ensure you receive the full 95 percent revenue, be sure to instrument your referring traffic URLs with a CID.

When Microsoft delivers a customer through other methods (tracked by an OCID), such as when the customer discovers the app in a Microsoft Store collection, through Microsoft Store search, or through any other Microsoft-owned properties, then you will receive 85 percent of the revenue from that purchase.

When there is no CID or OCID attributed to a purchase, as in the instance of a web search, you will receive 95 percent of the revenue.

The new fee structure is applicable to app purchases made on all Windows 10 PCs, Windows Mixed Reality, Windows 10 Mobile and Surface Hub devices. The new fee structure excludes all games and any purchases on Xbox consoles.

For full details on the new fee structure refer to the Microsoft Store App Developer Agreement. To see an overview of all the changes made to this version of the ADA please visit the change history.

The post Updated Microsoft Store App Developer Agreement: New Revenue Share appeared first on Windows Developer Blog.

Build a CI/CD pipeline for API Management


APIs have become mundane. They have become the de facto standard for connecting apps, data, and services. In the larger picture, APIs are driving digital transformation in organizations.

With the strategic value of APIs, a continuous integration (CI) and continuous deployment (CD) pipeline has become an important aspect of API development. It allows organizations to automate deployment of API changes without error-prone manual steps, detect issues earlier, and ultimately deliver value to end users faster.

This blog walks you through a conceptual framework for implementing a CI/CD pipeline for deploying changes to APIs published with Azure API Management.

The problem

Organizations today normally have multiple deployment environments (e.g., Development, Testing, Production) and use separate API Management instances for each environment. Some of these instances are shared by multiple development teams, who are responsible for different APIs with different release cadences.

As a result, customers often come to us with the following challenges:

  • How to automate deployment of APIs into API Management?
  • How to migrate configurations from one environment to another?
  • How to avoid interference between different development teams who share the same API Management instance?

We believe the approach described below will address all these challenges.

CI/CD with API Management

CI-CD with API Management workflow graph

The proposed approach is illustrated in the above picture. In this example, there are two deployment environments: Development and Production. Each has its own API Management instance. The Production instance is managed by a designated team, called API publishers. API developers only have access to the Development instance. 

The key in this proposed approach is to keep all configurations in Azure Resource Manager templates. These templates should be kept in a source control system. We will use Git as an example. As illustrated in the picture, there is a Publisher repository that contains all configurations of the Production API Management instance in a collection of templates:

  • Service template: Contains all service-level configurations (e.g., pricing tier and custom domains).
  • Shared templates: Contains shared resources throughout an API Management instance (e.g., groups, products, and identity providers).
  • API templates: Includes configurations of APIs and their sub-resources (e.g., operations and policies).
  • Master template: Ties everything together by linking to all templates.

API developers will fork and clone the Publisher repository. In most cases, they will focus on API templates for their APIs and should not change the shared or service templates.

When working with Resource Manager templates, we realize there are two challenges for API developers:

Once developers have finished developing and testing an API, and have generated the API template, they will submit a pull request to the Publisher repository. API publishers can validate the pull request and make sure the changes are safe and compliant. Most of the validations can be automated as part of the CI/CD pipeline. When the changes are approved and merged successfully, API publishers will deploy them to the Production instance. The deployment can also be easily automated with Azure Pipelines.

With this approach, the deployment of API changes into API Management instances can be automated and it is easy to promote changes from one environment to another. Since different API development teams will be working on different sets of API templates, it also reduces the chances of interference between different teams.

Next steps

You can find the guidance, examples, and tools in this GitHub repository. Please give it a try and let us know your feedback and questions.

We realize our customers bring a wide range of engineering cultures and existing automation solutions. The approach and tools provided here are not meant to be a one-size-fits-all solution. That's why we published and open-sourced everything on GitHub, so that you can extend and customize the solution.

Join the Bing Maps team at NAFA 2019 I&E


The Bing Maps team will be at NAFA 2019 Institute & Expo, known as the largest event for fleet professionals, April 15 – 17 in Louisville, Kentucky. Stop by booth #706, to learn more about the Bing Maps suite of premium routing and logistics solutions, including Truck Routing, Isochrone, Distance Matrix, Snap-to-Road, and Location intelligence APIs, as well as our Bing Maps Fleet Tracker solution on GitHub.

With Bing Maps’ easy-to-use Fleet Management APIs, companies large and small can benefit from robust, yet cost-effective, solutions tailored to the fleet management industry.

NAFA 2019
 

To learn more about the Bing Maps Fleet Management APIs and our licensing options, reach out to a Sales Specialist.

The Bing Maps team

Windows 10 SDK Preview Build 18346 available now!


On Tuesday, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 18346 or greater). The Preview SDK Build 18346 contains bug fixes and under-development changes to the API surface area.

The Preview SDK can be downloaded from the developer section on Windows Insider.

For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.

Things to note:

  • This build works in conjunction with previously released SDKs and Visual Studio 2017. You can install this SDK and still also continue to submit your apps that target Windows 10 build 1809 or earlier to the Microsoft Store.
  • The Windows SDK will now formally only be supported by Visual Studio 2017 and greater. You can download Visual Studio 2017 here.
  • This build of the Windows SDK will install ONLY on Windows 10 Insider Preview builds.
  • In order to assist with script access to the SDK, the ISO will also be able to be accessed through the following URL: https://go.microsoft.com/fwlink/?prd=11966&pver=1.0&plcid=0x409&clcid=0x409&ar=Flight&sar=Sdsurl&o1=18346 once the static URL is published.

Tools Updates

Message Compiler (mc.exe)

  • The “-mof” switch (to generate XP-compatible ETW helpers) is considered to be deprecated and will be removed in a future version of mc.exe. Removing this switch will cause the generated ETW helpers to expect Vista or later.
  • The “-A” switch (to generate .BIN files using ANSI encoding instead of Unicode) is considered to be deprecated and will be removed in a future version of mc.exe. Removing this switch will cause the generated .BIN files to use Unicode string encoding.
  • The behavior of the “-A” switch has changed. Prior to Windows 1607 Anniversary Update SDK, when using the -A switch, BIN files were encoded using the build system’s ANSI code page. In the Windows 1607 Anniversary Update SDK, mc.exe’s behavior was inadvertently changed to encode BIN files using the build system’s OEM code page. In the 19H1 SDK, mc.exe’s previous behavior has been restored and it now encodes BIN files using the build system’s ANSI code page. Note that the -A switch is deprecated, as ANSI-encoded BIN files do not provide a consistent user experience in multi-lingual systems.

Breaking Changes

Change to effect graph of the AcrylicBrush

In this Preview SDK, we’ll be adding a blend mode to the effect graph of the AcrylicBrush called Luminosity. This blend mode will ensure that shadows do not appear behind acrylic surfaces without a cutout. We will also be exposing a LuminosityBlendOpacity API for tweaking this behavior, which allows for more AcrylicBrush customization.

By default, for those that have not specified any LuminosityBlendOpacity on their AcrylicBrushes, we have implemented some logic to ensure that the Acrylic will look as similar as it can to current 1809 acrylics. Please note that we will be updating our default brushes to account for this recipe change.

TraceLoggingProvider.h  / TraceLoggingWrite

Events generated by TraceLoggingProvider.h (e.g. via TraceLoggingWrite macros) will now always have Id and Version set to 0.

Previously, TraceLoggingProvider.h would assign IDs to events at link time. These IDs were unique within a DLL or EXE, but changed from build to build and from module to module.

API Updates, Additions and Removals

Removals:

Note: The following recent removals have been made since earlier flights.


namespace Windows.Networking.PushNotifications {
  public static class PushNotificationChannelManager {
    public static event EventHandler<PushNotificationChannelsRevokedEventArgs> ChannelsRevoked;
  }
  public sealed class PushNotificationChannelsRevokedEventArgs
}
namespace Windows.UI.Composition {
  public sealed class CompositionProjectedShadow : CompositionObject {
    float MaxOpacity { get; set; }
    float MinOpacity { get; set; }
    float OpacityFalloff { get; set; }
  }
  public sealed class CompositionProjectedShadowCaster : CompositionObject {
    Visual AncestorClip { get; set; }
    CompositionBrush Mask { get; set; }
  }
  public enum CompositionProjectedShadowDrawOrder
  public sealed class CompositionProjectedShadowReceiver : CompositionObject {
    CompositionProjectedShadowDrawOrder DrawOrder { get; set; }
    CompositionBrush Mask { get; set; }
  }
  public sealed class CompositionRadialGradientBrush : CompositionGradientBrush {
    Vector2 GradientOrigin { get; set; }
  }
  public interface ICompositorPartner_ProjectedShadow
}

Additions:


namespace Windows.AI.MachineLearning {
  public sealed class LearningModelSession : IClosable {
    public LearningModelSession(LearningModel model, LearningModelDevice deviceToRunOn, LearningModelSessionOptions learningModelSessionOptions);
  }
  public sealed class LearningModelSessionOptions
  public sealed class TensorBoolean : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorBoolean CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorBoolean CreateFromShapeArrayAndDataArray(long[] shape, bool[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorDouble : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorDouble CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorDouble CreateFromShapeArrayAndDataArray(long[] shape, double[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorFloat : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorFloat CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorFloat CreateFromShapeArrayAndDataArray(long[] shape, float[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorFloat16Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorFloat16Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorFloat16Bit CreateFromShapeArrayAndDataArray(long[] shape, float[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorInt16Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorInt16Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorInt16Bit CreateFromShapeArrayAndDataArray(long[] shape, short[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorInt32Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorInt32Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorInt32Bit CreateFromShapeArrayAndDataArray(long[] shape, int[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorInt64Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorInt64Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorInt64Bit CreateFromShapeArrayAndDataArray(long[] shape, long[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorInt8Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorInt8Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorInt8Bit CreateFromShapeArrayAndDataArray(long[] shape, byte[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorString : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorString CreateFromShapeArrayAndDataArray(long[] shape, string[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorUInt16Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorUInt16Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorUInt16Bit CreateFromShapeArrayAndDataArray(long[] shape, ushort[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorUInt32Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorUInt32Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorUInt32Bit CreateFromShapeArrayAndDataArray(long[] shape, uint[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorUInt64Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorUInt64Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorUInt64Bit CreateFromShapeArrayAndDataArray(long[] shape, ulong[] data);
    IMemoryBufferReference CreateReference();
  }
  public sealed class TensorUInt8Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
    void Close();
    public static TensorUInt8Bit CreateFromBuffer(long[] shape, IBuffer buffer);
    public static TensorUInt8Bit CreateFromShapeArrayAndDataArray(long[] shape, byte[] data);
    IMemoryBufferReference CreateReference();
  }
}
namespace Windows.ApplicationModel {
  public sealed class Package {
    StorageFolder EffectiveLocation { get; }
    StorageFolder MutableLocation { get; }
  }
}
namespace Windows.ApplicationModel.AppService {
  public sealed class AppServiceConnection : IClosable {
    public static IAsyncOperation<StatelessAppServiceResponse> SendStatelessMessageAsync(AppServiceConnection connection, RemoteSystemConnectionRequest connectionRequest, ValueSet message);
  }
  public sealed class AppServiceTriggerDetails {
    string CallerRemoteConnectionToken { get; }
  }
  public sealed class StatelessAppServiceResponse
  public enum StatelessAppServiceResponseStatus
}
namespace Windows.ApplicationModel.Background {
  public sealed class ConversationalAgentTrigger : IBackgroundTrigger
}
namespace Windows.ApplicationModel.Calls {
  public sealed class PhoneLine {
    string TransportDeviceId { get; }
    void EnableTextReply(bool value);
  }
  public enum PhoneLineTransport {
    Bluetooth = 2,
  }
  public sealed class PhoneLineTransportDevice
}
namespace Windows.ApplicationModel.Calls.Background {
  public enum PhoneIncomingCallDismissedReason
  public sealed class PhoneIncomingCallDismissedTriggerDetails
  public enum PhoneTriggerType {
    IncomingCallDismissed = 6,
  }
}
namespace Windows.ApplicationModel.Calls.Provider {
  public static class PhoneCallOriginManager {
    public static bool IsSupported { get; }
  }
}
namespace Windows.ApplicationModel.ConversationalAgent {
  public sealed class ConversationalAgentSession : IClosable
  public sealed class ConversationalAgentSessionInterruptedEventArgs
  public enum ConversationalAgentSessionUpdateResponse
  public sealed class ConversationalAgentSignal
  public sealed class ConversationalAgentSignalDetectedEventArgs
  public enum ConversationalAgentState
  public sealed class ConversationalAgentSystemStateChangedEventArgs
  public enum ConversationalAgentSystemStateChangeType
}
namespace Windows.ApplicationModel.Preview.Holographic {
  public sealed class HolographicKeyboardPlacementOverridePreview
}
namespace Windows.ApplicationModel.Resources {
  public sealed class ResourceLoader {
    public static ResourceLoader GetForUIContext(UIContext context);
  }
}
namespace Windows.ApplicationModel.Resources.Core {
  public sealed class ResourceCandidate {
    ResourceCandidateKind Kind { get; }
  }
  public enum ResourceCandidateKind
  public sealed class ResourceContext {
    public static ResourceContext GetForUIContext(UIContext context);
  }
}
namespace Windows.ApplicationModel.UserActivities {
  public sealed class UserActivityChannel {
    public static UserActivityChannel GetForUser(User user);
  }
}
namespace Windows.Devices.Bluetooth.GenericAttributeProfile {
  public enum GattServiceProviderAdvertisementStatus {
    StartedWithoutAllAdvertisementData = 4,
  }
  public sealed class GattServiceProviderAdvertisingParameters {
    IBuffer ServiceData { get; set; }
  }
}
namespace Windows.Devices.Enumeration {
  public enum DevicePairingKinds : uint {
    ProvidePasswordCredential = (uint)16,
  }
  public sealed class DevicePairingRequestedEventArgs {
    void AcceptWithPasswordCredential(PasswordCredential passwordCredential);
  }
}
namespace Windows.Devices.Input {
  public sealed class PenDevice
}
namespace Windows.Devices.PointOfService {
  public sealed class JournalPrinterCapabilities : ICommonPosPrintStationCapabilities {
    bool IsReversePaperFeedByLineSupported { get; }
    bool IsReversePaperFeedByMapModeUnitSupported { get; }
    bool IsReverseVideoSupported { get; }
    bool IsStrikethroughSupported { get; }
    bool IsSubscriptSupported { get; }
    bool IsSuperscriptSupported { get; }
  }
  public sealed class JournalPrintJob : IPosPrinterJob {
    void FeedPaperByLine(int lineCount);
    void FeedPaperByMapModeUnit(int distance);
    void Print(string data, PosPrinterPrintOptions printOptions);
  }
  public sealed class PosPrinter : IClosable {
    IVectorView<uint> SupportedBarcodeSymbologies { get; }
    PosPrinterFontProperty GetFontProperty(string typeface);
  }
  public sealed class PosPrinterFontProperty
  public sealed class PosPrinterPrintOptions
  public sealed class ReceiptPrinterCapabilities : ICommonPosPrintStationCapabilities, ICommonReceiptSlipCapabilities {
    bool IsReversePaperFeedByLineSupported { get; }
    bool IsReversePaperFeedByMapModeUnitSupported { get; }
    bool IsReverseVideoSupported { get; }
    bool IsStrikethroughSupported { get; }
    bool IsSubscriptSupported { get; }
    bool IsSuperscriptSupported { get; }
  }
  public sealed class ReceiptPrintJob : IPosPrinterJob, IReceiptOrSlipJob {
    void FeedPaperByLine(int lineCount);
    void FeedPaperByMapModeUnit(int distance);
    void Print(string data, PosPrinterPrintOptions printOptions);
    void StampPaper();
  }
  public struct SizeUInt32
  public sealed class SlipPrinterCapabilities : ICommonPosPrintStationCapabilities, ICommonReceiptSlipCapabilities {
    bool IsReversePaperFeedByLineSupported { get; }
    bool IsReversePaperFeedByMapModeUnitSupported { get; }
    bool IsReverseVideoSupported { get; }
    bool IsStrikethroughSupported { get; }
    bool IsSubscriptSupported { get; }
    bool IsSuperscriptSupported { get; }
  }
  public sealed class SlipPrintJob : IPosPrinterJob, IReceiptOrSlipJob {
    void FeedPaperByLine(int lineCount);
    void FeedPaperByMapModeUnit(int distance);
    void Print(string data, PosPrinterPrintOptions printOptions);
  }
}
namespace Windows.Globalization {
  public sealed class CurrencyAmount
}
namespace Windows.Graphics.DirectX {
  public enum DirectXPrimitiveTopology
}
namespace Windows.Graphics.Holographic {
  public sealed class HolographicCamera {
    HolographicViewConfiguration ViewConfiguration { get; }
  }
  public sealed class HolographicDisplay {
    HolographicViewConfiguration TryGetViewConfiguration(HolographicViewConfigurationKind kind);
  }
  public sealed class HolographicViewConfiguration
  public enum HolographicViewConfigurationKind
}
namespace Windows.Management.Deployment {
  public enum AddPackageByAppInstallerOptions : uint {
    LimitToExistingPackages = (uint)512,
  }
  public enum DeploymentOptions : uint {
    RetainFilesOnFailure = (uint)2097152,
  }
}
namespace Windows.Media.Devices {
  public sealed class InfraredTorchControl
  public enum InfraredTorchMode
  public sealed class VideoDeviceController : IMediaDeviceController {
    InfraredTorchControl InfraredTorchControl { get; }
  }
}
namespace Windows.Media.Miracast {
  public sealed class MiracastReceiver
  public sealed class MiracastReceiverApplySettingsResult
  public enum MiracastReceiverApplySettingsStatus
  public enum MiracastReceiverAuthorizationMethod
  public sealed class MiracastReceiverConnection : IClosable
  public sealed class MiracastReceiverConnectionCreatedEventArgs
  public sealed class MiracastReceiverCursorImageChannel
  public sealed class MiracastReceiverCursorImageChannelSettings
  public sealed class MiracastReceiverDisconnectedEventArgs
  public enum MiracastReceiverDisconnectReason
  public sealed class MiracastReceiverGameControllerDevice
  public enum MiracastReceiverGameControllerDeviceUsageMode
  public sealed class MiracastReceiverInputDevices
  public sealed class MiracastReceiverKeyboardDevice
  public enum MiracastReceiverListeningStatus
  public sealed class MiracastReceiverMediaSourceCreatedEventArgs
  public sealed class MiracastReceiverSession : IClosable
  public sealed class MiracastReceiverSessionStartResult
  public enum MiracastReceiverSessionStartStatus
  public sealed class MiracastReceiverSettings
  public sealed class MiracastReceiverStatus
  public sealed class MiracastReceiverStreamControl
  public sealed class MiracastReceiverVideoStreamSettings
  public enum MiracastReceiverWiFiStatus
  public sealed class MiracastTransmitter
  public enum MiracastTransmitterAuthorizationStatus
}
namespace Windows.Networking.Connectivity {
  public enum NetworkAuthenticationType {
    Wpa3 = 10,
    Wpa3Sae = 11,
  }
}
namespace Windows.Networking.NetworkOperators {
  public sealed class ESim {
    ESimDiscoverResult Discover();
    ESimDiscoverResult Discover(string serverAddress, string matchingId);
    IAsyncOperation<ESimDiscoverResult> DiscoverAsync();
    IAsyncOperation<ESimDiscoverResult> DiscoverAsync(string serverAddress, string matchingId);
  }
  public sealed class ESimDiscoverEvent
  public sealed class ESimDiscoverResult
  public enum ESimDiscoverResultKind
}
namespace Windows.Perception.People {
  public sealed class EyesPose
  public enum HandJointKind
  public sealed class HandMeshObserver
  public struct HandMeshVertex
  public sealed class HandMeshVertexState
  public sealed class HandPose
  public struct JointPose
  public enum JointPoseAccuracy
}
namespace Windows.Perception.Spatial {
  public struct SpatialRay
}
namespace Windows.Perception.Spatial.Preview {
  public sealed class SpatialGraphInteropFrameOfReferencePreview
  public static class SpatialGraphInteropPreview {
    public static SpatialGraphInteropFrameOfReferencePreview TryCreateFrameOfReference(SpatialCoordinateSystem coordinateSystem);
    public static SpatialGraphInteropFrameOfReferencePreview TryCreateFrameOfReference(SpatialCoordinateSystem coordinateSystem, Vector3 relativePosition);
    public static SpatialGraphInteropFrameOfReferencePreview TryCreateFrameOfReference(SpatialCoordinateSystem coordinateSystem, Vector3 relativePosition, Quaternion relativeOrientation);
  }
}
namespace Windows.Security.Authorization.AppCapabilityAccess {
  public sealed class AppCapability
  public sealed class AppCapabilityAccessChangedEventArgs
  public enum AppCapabilityAccessStatus
}
namespace Windows.Security.DataProtection {
  public enum UserDataAvailability
  public sealed class UserDataAvailabilityStateChangedEventArgs
  public sealed class UserDataBufferUnprotectResult
  public enum UserDataBufferUnprotectStatus
  public sealed class UserDataProtectionManager
  public sealed class UserDataStorageItemProtectionInfo
  public enum UserDataStorageItemProtectionStatus
}
namespace Windows.Storage.AccessCache {
  public static class StorageApplicationPermissions {
    public static StorageItemAccessList GetFutureAccessListForUser(User user);
    public static StorageItemMostRecentlyUsedList GetMostRecentlyUsedListForUser(User user);
  }
}
namespace Windows.Storage.Pickers {
  public sealed class FileOpenPicker {
    User User { get; }
    public static FileOpenPicker CreateForUser(User user);
  }
  public sealed class FileSavePicker {
    User User { get; }
    public static FileSavePicker CreateForUser(User user);
  }
  public sealed class FolderPicker {
    User User { get; }
    public static FolderPicker CreateForUser(User user);
  }
}
namespace Windows.System {
  public sealed class DispatcherQueue {
    bool HasThreadAccess { get; }
  }
  public enum ProcessorArchitecture {
    Arm64 = 12,
    X86OnArm64 = 14,
  }
}
namespace Windows.System.Profile {
  public static class AppApplicability
  public sealed class UnsupportedAppRequirement
  public enum UnsupportedAppRequirementReasons : uint
}
namespace Windows.System.RemoteSystems {
  public sealed class RemoteSystem {
    User User { get; }
    public static RemoteSystemWatcher CreateWatcherForUser(User user);
    public static RemoteSystemWatcher CreateWatcherForUser(User user, IIterable<IRemoteSystemFilter> filters);
  }
  public sealed class RemoteSystemApp {
    string ConnectionToken { get; }
    User User { get; }
  }
  public sealed class RemoteSystemConnectionRequest {
    string ConnectionToken { get; }
    public static RemoteSystemConnectionRequest CreateFromConnectionToken(string connectionToken);
    public static RemoteSystemConnectionRequest CreateFromConnectionTokenForUser(User user, string connectionToken);
  }
  public sealed class RemoteSystemWatcher {
    User User { get; }
  }
}
namespace Windows.UI {
  public sealed class UIContentRoot
  public sealed class UIContext
}
namespace Windows.UI.Composition {
  public enum CompositionBitmapInterpolationMode {
    MagLinearMinLinearMipLinear = 2,
    MagLinearMinLinearMipNearest = 3,
    MagLinearMinNearestMipLinear = 4,
    MagLinearMinNearestMipNearest = 5,
    MagNearestMinLinearMipLinear = 6,
    MagNearestMinLinearMipNearest = 7,
    MagNearestMinNearestMipLinear = 8,
    MagNearestMinNearestMipNearest = 9,
  }
  public sealed class CompositionGraphicsDevice : CompositionObject {
    CompositionMipmapSurface CreateMipmapSurface(SizeInt32 sizePixels, DirectXPixelFormat pixelFormat, DirectXAlphaMode alphaMode);
    void Trim();
  }
  public sealed class CompositionMipmapSurface : CompositionObject, ICompositionSurface
  public sealed class CompositionProjectedShadow : CompositionObject
  public sealed class CompositionProjectedShadowCaster : CompositionObject
  public sealed class CompositionProjectedShadowCasterCollection : CompositionObject, IIterable<CompositionProjectedShadowCaster>
  public sealed class CompositionProjectedShadowReceiver : CompositionObject
  public sealed class CompositionProjectedShadowReceiverUnorderedCollection : CompositionObject, IIterable<CompositionProjectedShadowReceiver>
  public sealed class CompositionRadialGradientBrush : CompositionGradientBrush
  public sealed class CompositionSurfaceBrush : CompositionBrush {
    bool SnapToPixels { get; set; }
  }
  public class CompositionTransform : CompositionObject
  public sealed class CompositionVisualSurface : CompositionObject, ICompositionSurface
  public sealed class Compositor : IClosable {
    CompositionProjectedShadow CreateProjectedShadow();
    CompositionProjectedShadowCaster CreateProjectedShadowCaster();
    CompositionProjectedShadowReceiver CreateProjectedShadowReceiver();
    CompositionRadialGradientBrush CreateRadialGradientBrush();
    CompositionVisualSurface CreateVisualSurface();
  }
  public interface IVisualElement
}
namespace Windows.UI.Composition.Interactions {
  public enum InteractionBindingAxisModes : uint
  public sealed class InteractionTracker : CompositionObject {
    public static InteractionBindingAxisModes GetBindingMode(InteractionTracker boundTracker1, InteractionTracker boundTracker2);
    public static void SetBindingMode(InteractionTracker boundTracker1, InteractionTracker boundTracker2, InteractionBindingAxisModes axisMode);
  }
  public sealed class InteractionTrackerCustomAnimationStateEnteredArgs {
    bool IsFromBinding { get; }
  }
  public sealed class InteractionTrackerIdleStateEnteredArgs {
    bool IsFromBinding { get; }
  }
  public sealed class InteractionTrackerInertiaStateEnteredArgs {
    bool IsFromBinding { get; }
  }
  public sealed class InteractionTrackerInteractingStateEnteredArgs {
    bool IsFromBinding { get; }
  }
  public class VisualInteractionSource : CompositionObject, ICompositionInteractionSource {
    public static VisualInteractionSource CreateFromIVisualElement(IVisualElement source);
  }
}
namespace Windows.UI.Composition.Scenes {
  public enum SceneAlphaMode
  public enum SceneAttributeSemantic
  public sealed class SceneBoundingBox : SceneObject
  public class SceneComponent : SceneObject
  public sealed class SceneComponentCollection : SceneObject, IIterable<SceneComponent>, IVector<SceneComponent>
  public enum SceneComponentType
  public class SceneMaterial : SceneObject
  public class SceneMaterialInput : SceneObject
  public sealed class SceneMesh : SceneObject
  public sealed class SceneMeshMaterialAttributeMap : SceneObject, IIterable<IKeyValuePair<string, SceneAttributeSemantic>>, IMap<string, SceneAttributeSemantic>
  public sealed class SceneMeshRendererComponent : SceneRendererComponent
  public sealed class SceneMetallicRoughnessMaterial : ScenePbrMaterial
  public sealed class SceneModelTransform : CompositionTransform
  public sealed class SceneNode : SceneObject
  public sealed class SceneNodeCollection : SceneObject, IIterable<SceneNode>, IVector<SceneNode>
  public class SceneObject : CompositionObject
  public class ScenePbrMaterial : SceneMaterial
  public class SceneRendererComponent : SceneComponent
  public sealed class SceneSurfaceMaterialInput : SceneMaterialInput
  public sealed class SceneVisual : ContainerVisual
  public enum SceneWrappingMode
}
namespace Windows.UI.Core {
  public sealed class CoreWindow : ICorePointerRedirector, ICoreWindow {
    UIContext UIContext { get; }
  }
}
namespace Windows.UI.Core.Preview {
  public sealed class CoreAppWindowPreview
}
namespace Windows.UI.Input {
  public class AttachableInputObject : IClosable
  public enum GazeInputAccessStatus
  public sealed class InputActivationListener : AttachableInputObject
  public sealed class InputActivationListenerActivationChangedEventArgs
  public enum InputActivationState
}
namespace Windows.UI.Input.Preview {
  public static class InputActivationListenerPreview
}
namespace Windows.UI.Input.Spatial {
  public sealed class SpatialInteractionManager {
    public static bool IsSourceKindSupported(SpatialInteractionSourceKind kind);
  }
  public sealed class SpatialInteractionSource {
    HandMeshObserver TryCreateHandMeshObserver();
    IAsyncOperation<HandMeshObserver> TryCreateHandMeshObserverAsync();
  }
  public sealed class SpatialInteractionSourceState {
    HandPose TryGetHandPose();
  }
  public sealed class SpatialPointerPose {
    EyesPose Eyes { get; }
    bool IsHeadCapturedBySystem { get; }
  }
}
namespace Windows.UI.Notifications {
  public sealed class ToastActivatedEventArgs {
    ValueSet UserInput { get; }
  }
  public sealed class ToastNotification {
    bool ExpiresOnReboot { get; set; }
  }
}
namespace Windows.UI.ViewManagement {
  public sealed class ApplicationView {
    string PersistedStateId { get; set; }
    UIContext UIContext { get; }
    WindowingEnvironment WindowingEnvironment { get; }
    public static void ClearAllPersistedState();
    public static void ClearPersistedState(string key);
    IVectorView<DisplayRegion> GetDisplayRegions();
  }
  public sealed class InputPane {
    public static InputPane GetForUIContext(UIContext context);
  }
  public sealed class UISettings {
    bool AutoHideScrollBars { get; }
    event TypedEventHandler<UISettings, UISettingsAutoHideScrollBarsChangedEventArgs> AutoHideScrollBarsChanged;
  }
  public sealed class UISettingsAutoHideScrollBarsChangedEventArgs
}
namespace Windows.UI.ViewManagement.Core {
  public sealed class CoreInputView {
    public static CoreInputView GetForUIContext(UIContext context);
  }
}
namespace Windows.UI.WindowManagement {
  public sealed class AppWindow
  public sealed class AppWindowChangedEventArgs
  public sealed class AppWindowClosedEventArgs
  public enum AppWindowClosedReason
  public sealed class AppWindowCloseRequestedEventArgs
  public sealed class AppWindowFrame
  public enum AppWindowFrameStyle
  public sealed class AppWindowPlacement
  public class AppWindowPresentationConfiguration
  public enum AppWindowPresentationKind
  public sealed class AppWindowPresenter
  public sealed class AppWindowTitleBar
  public sealed class AppWindowTitleBarOcclusion
  public enum AppWindowTitleBarVisibility
  public sealed class CompactOverlayPresentationConfiguration : AppWindowPresentationConfiguration
  public sealed class DefaultPresentationConfiguration : AppWindowPresentationConfiguration
  public sealed class DisplayRegion
  public sealed class FullScreenPresentationConfiguration : AppWindowPresentationConfiguration
  public sealed class WindowingEnvironment
  public sealed class WindowingEnvironmentAddedEventArgs
  public sealed class WindowingEnvironmentChangedEventArgs
  public enum WindowingEnvironmentKind
  public sealed class WindowingEnvironmentRemovedEventArgs
}
namespace Windows.UI.WindowManagement.Preview {
  public sealed class WindowManagementPreview
}
namespace Windows.UI.Xaml {
  public class UIElement : DependencyObject, IAnimationObject, IVisualElement {
    Vector3 ActualOffset { get; }
    Vector2 ActualSize { get; }
    Shadow Shadow { get; set; }
    public static DependencyProperty ShadowProperty { get; }
    UIContext UIContext { get; }
    XamlRoot XamlRoot { get; set; }
  }
  public class UIElementWeakCollection : IIterable<UIElement>, IVector<UIElement>
  public sealed class Window {
    UIContext UIContext { get; }
  }
  public sealed class XamlRoot
  public sealed class XamlRootChangedEventArgs
}
namespace Windows.UI.Xaml.Controls {
  public sealed class DatePickerFlyoutPresenter : Control {
    bool IsDefaultShadowEnabled { get; set; }
    public static DependencyProperty IsDefaultShadowEnabledProperty { get; }
  }
  public class FlyoutPresenter : ContentControl {
    bool IsDefaultShadowEnabled { get; set; }
    public static DependencyProperty IsDefaultShadowEnabledProperty { get; }
  }
  public class InkToolbar : Control {
    InkPresenter TargetInkPresenter { get; set; }
    public static DependencyProperty TargetInkPresenterProperty { get; }
  }
  public class MenuFlyoutPresenter : ItemsControl {
    bool IsDefaultShadowEnabled { get; set; }
    public static DependencyProperty IsDefaultShadowEnabledProperty { get; }
  }
  public sealed class TimePickerFlyoutPresenter : Control {
    bool IsDefaultShadowEnabled { get; set; }
    public static DependencyProperty IsDefaultShadowEnabledProperty { get; }
  }
  public class TwoPaneView : Control
  public enum TwoPaneViewMode
  public enum TwoPaneViewPriority
  public enum TwoPaneViewTallModeConfiguration
  public enum TwoPaneViewWideModeConfiguration
}
namespace Windows.UI.Xaml.Controls.Maps {
  public sealed class MapControl : Control {
    bool CanTiltDown { get; }
    public static DependencyProperty CanTiltDownProperty { get; }
    bool CanTiltUp { get; }
    public static DependencyProperty CanTiltUpProperty { get; }
    bool CanZoomIn { get; }
    public static DependencyProperty CanZoomInProperty { get; }
    bool CanZoomOut { get; }
    public static DependencyProperty CanZoomOutProperty { get; }
  }
  public enum MapLoadingStatus {
    DownloadedMapsManagerUnavailable = 3,
  }
}
namespace Windows.UI.Xaml.Controls.Primitives {
  public sealed class AppBarTemplateSettings : DependencyObject {
    double NegativeCompactVerticalDelta { get; }
    double NegativeHiddenVerticalDelta { get; }
    double NegativeMinimalVerticalDelta { get; }
  }
  public sealed class CommandBarTemplateSettings : DependencyObject {
    double OverflowContentCompactYTranslation { get; }
    double OverflowContentHiddenYTranslation { get; }
    double OverflowContentMinimalYTranslation { get; }
  }
  public class FlyoutBase : DependencyObject {
    bool IsConstrainedToRootBounds { get; }
    bool ShouldConstrainToRootBounds { get; set; }
    public static DependencyProperty ShouldConstrainToRootBoundsProperty { get; }
    XamlRoot XamlRoot { get; set; }
  }
  public sealed class Popup : FrameworkElement {
    bool IsConstrainedToRootBounds { get; }
    bool ShouldConstrainToRootBounds { get; set; }
    public static DependencyProperty ShouldConstrainToRootBoundsProperty { get; }
  }
}
namespace Windows.UI.Xaml.Core.Direct {
  public enum XamlPropertyIndex {
    AppBarTemplateSettings_NegativeCompactVerticalDelta = 2367,
    AppBarTemplateSettings_NegativeHiddenVerticalDelta = 2368,
    AppBarTemplateSettings_NegativeMinimalVerticalDelta = 2369,
    CommandBarTemplateSettings_OverflowContentCompactYTranslation = 2384,
    CommandBarTemplateSettings_OverflowContentHiddenYTranslation = 2385,
    CommandBarTemplateSettings_OverflowContentMinimalYTranslation = 2386,
    FlyoutBase_ShouldConstrainToRootBounds = 2378,
    FlyoutPresenter_IsDefaultShadowEnabled = 2380,
    MenuFlyoutPresenter_IsDefaultShadowEnabled = 2381,
    Popup_ShouldConstrainToRootBounds = 2379,
    ThemeShadow_Receivers = 2279,
    UIElement_ActualOffset = 2382,
    UIElement_ActualSize = 2383,
    UIElement_Shadow = 2130,
  }
  public enum XamlTypeIndex {
    ThemeShadow = 964,
  }
}
namespace Windows.UI.Xaml.Documents {
  public class TextElement : DependencyObject {
    XamlRoot XamlRoot { get; set; }
  }
}
namespace Windows.UI.Xaml.Hosting {
  public sealed class ElementCompositionPreview {
    public static UIElement GetAppWindowContent(AppWindow appWindow);
    public static void SetAppWindowContent(AppWindow appWindow, UIElement xamlContent);
  }
}
namespace Windows.UI.Xaml.Input {
  public sealed class FocusManager {
    public static object GetFocusedElement(XamlRoot xamlRoot);
  }
  public class StandardUICommand : XamlUICommand {
    StandardUICommandKind Kind { get; set; }
  }
}
namespace Windows.UI.Xaml.Media {
  public class AcrylicBrush : XamlCompositionBrushBase {
    IReference<double> TintLuminosityOpacity { get; set; }
    public static DependencyProperty TintLuminosityOpacityProperty { get; }
  }
  public class Shadow : DependencyObject
  public class ThemeShadow : Shadow
  public sealed class VisualTreeHelper {
    public static IVectorView<Popup> GetOpenPopupsForXamlRoot(XamlRoot xamlRoot);
  }
}
namespace Windows.UI.Xaml.Media.Animation {
  public class GravityConnectedAnimationConfiguration : ConnectedAnimationConfiguration {
    bool IsShadowEnabled { get; set; }
  }
}
namespace Windows.Web.Http {
  public sealed class HttpClient : IClosable, IStringable {
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryDeleteAsync(Uri uri);
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryGetAsync(Uri uri);
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryGetAsync(Uri uri, HttpCompletionOption completionOption);
    IAsyncOperationWithProgress<HttpGetBufferResult, HttpProgress> TryGetBufferAsync(Uri uri);
    IAsyncOperationWithProgress<HttpGetInputStreamResult, HttpProgress> TryGetInputStreamAsync(Uri uri);
    IAsyncOperationWithProgress<HttpGetStringResult, HttpProgress> TryGetStringAsync(Uri uri);
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryPostAsync(Uri uri, IHttpContent content);
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryPutAsync(Uri uri, IHttpContent content);
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TrySendRequestAsync(HttpRequestMessage request);
    IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TrySendRequestAsync(HttpRequestMessage request, HttpCompletionOption completionOption);
  }
  public sealed class HttpGetBufferResult : IClosable, IStringable
  public sealed class HttpGetInputStreamResult : IClosable, IStringable
  public sealed class HttpGetStringResult : IClosable, IStringable
  public sealed class HttpRequestResult : IClosable, IStringable
}
namespace Windows.Web.Http.Filters {
  public sealed class HttpBaseProtocolFilter : IClosable, IHttpFilter {
    User User { get; }
    public static HttpBaseProtocolFilter CreateForUser(User user);
  }
}

The post Windows 10 SDK Preview Build 18346 available now! appeared first on Windows Developer Blog.

Announcing the Open Sourcing of Windows Calculator


Today, we’re excited to announce that we are open sourcing Windows Calculator on GitHub under the MIT License. This includes the source code, build system, unit tests, and product roadmap. Our goal is to build an even better user experience in partnership with the community. We are encouraging your fresh perspectives and increased participation to help define the future of Calculator.

Image of Windows Calculator

As developers, if you would like to know how different parts of the Calculator app work, easily integrate Calculator logic or UI into your own applications, or contribute directly to something that ships in Windows, now you can. Calculator will continue to go through all usual testing, compliance, security, quality processes, and Insider flighting, just as we do for our other applications. You can learn more about these details in our documentation on GitHub.

GitHub documentation of Windows Calculator

Open Contribution

In addition to reusing and adapting the code in your own apps, anyone can participate in the development of Windows Calculator. Getting involved is simple. The project is “clone-and-go” and development will follow the standard GitHub flow. There are many ways for developers at all stages to contribute:

  • Participate in discussions
  • Report or fix issues
  • Suggest new feature ideas
  • Prototype new features
  • Design and build together with our engineers

Learning

Reviewing the Calculator code is a great way to learn about the latest Microsoft technologies like the Universal Windows Platform, XAML, and Azure Pipelines. Through this project, developers can learn from Microsoft’s full development lifecycle, as well as reuse the code to build their own experiences. It’s also a great example of Fluent app design. To make this even easier, we will be contributing custom controls and API extensions that we use in Calculator and other apps, to projects like the Windows Community Toolkit and the Windows UI Library.

We are happy to welcome all of you to the Windows Calculator team! To get started, check out the Windows Calculator project on GitHub.

The post Announcing the Open Sourcing of Windows Calculator appeared first on Windows Developer Blog.

ASP.NET Core updates in .NET Core 3.0 Preview 3

Announcing .NET Core 3 Preview 3


Today, we are announcing .NET Core 3.0 Preview 3. We would like to update you on the .NET Core 3.0 schedule and introduce you to improvements in .NET Core SDK installers, Docker containers, Range, and Index. We also have updates on the Windows Desktop and Entity Framework projects.

Download and get started with .NET Core 3 Preview 3 right now on Windows, macOS and Linux.

You can see complete details of the release in the .NET Core 3 Preview 3 release notes.

.NET Core 3.0 will be supported in Visual Studio 2019, Visual Studio for Mac and Visual Studio Code. A new C# extension update (install VSIX) has been produced with the latest C# compiler (aligned with .NET Core Preview 3 and Visual Studio 2019 Preview 4).

We recently published an update on Floating-Point Parsing and Formatting improvements in .NET Core 3.0. Please check out that post if you use floating-point APIs and associated mathematical operations.

Schedule

We have seen people asking questions about when .NET Core 3.0 will be released. They also want to know if .NET Core 3.0 will be included in Visual Studio 2019. Visual Studio 2019 RC was recently released, which is part of the motivation for the questions.

We plan to ship .NET Core 3.0 in the second half of 2019. We will announce the ship date at the Build 2019 conference.

Visual Studio 2019 will be released on April 2nd. .NET Core 2.1 and 2.2 will be included in that release, just like they have been in the Visual Studio 2019 preview builds. At the point of the final .NET Core 3.0 release, we’ll announce support for a specific Visual Studio 2019 update, and .NET Core 3.0 will be included in Visual Studio 2019 from that point on.

We’re planning to ship .NET Core 3.0 preview releases every month. By coincidence, the previews now align with the months: we’re releasing Preview 3 today and we’re in the third month. We hope to continue that pattern until the final release.

.NET Core SDK installers will now Upgrade in Place

For Windows, the .NET Core SDK MSI installers will start upgrading patch versions in place. This will reduce the number of SDKs that are installed on both developer and production machines.

The upgrade policy will specifically target .NET Core SDK feature bands. Feature bands are defined by the hundreds group in the patch section of the version number. For example, 3.0.101 and 3.0.201 are versions in two different feature bands while 3.0.101 and 3.0.199 are in the same feature band.

This means when .NET Core SDK 3.0.101 becomes available and is installed, .NET Core SDK 3.0.100 will be removed from the machine if it exists. When .NET Core SDK 3.0.200 becomes available and is installed on the same machine, .NET Core SDK 3.0.101 will not be removed. In that situation, .NET Core SDK 3.0.200 will still be used by default, but .NET Core SDK 3.0.101 (or higher .1xx versions) will still be usable if it is configured for use via global.json.

This approach aligns with the behavior of global.json, which allows roll forward across patch versions, but not feature bands of the SDK. Thus, upgrading via the SDK installer will not result in errors due to a missing SDK. Feature bands also align with side by side Visual Studio installations for those users that install SDKs for Visual Studio use.
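For example, a project that needs to stay on a particular feature band can pin it with a global.json like the following (the version shown is illustrative); roll forward will then pick up later patches within that band:

{
  "sdk": {
    "version": "3.0.101"
  }
}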

We would like feedback on the approach we’ve taken for upgrading the .NET Core SDK in place. We considered a few different policies and chose this one because it is the most conservative. We’d also like to hear if it is important to build similar experience on macOS. For Linux, we use package managers, which have an existing upgrade-in-place behavior.

For more information, please check out:

Docker and cgroup memory Limits

Many developers are packaging and running their application with containers. A key scenario is limiting a container’s resources such as CPU or memory. We implemented support for memory limits back in 2017. Unfortunately, we found that the implementation isn’t aggressive enough to reliably stay under the configured limits and applications are still being OOM killed when memory limits are set (particularly <500MB). We are fixing that in .NET Core 3.0. Note that this scenario only applies if memory limits are set.

The Docker resource limits feature is built on top of cgroups, which is a Linux kernel feature. From a runtime perspective, we need to target cgroup primitives.

You can limit the available memory for a container with the docker run -m argument, as shown in the following example that creates an Alpine-based container with a 4MB memory limit (and then prints the memory limit):

C:>docker run -m 4mb --rm alpine cat /sys/fs/cgroup/memory/memory.limit_in_bytes
4194304

Mid-way through 2018, we started testing applications with memory limits set below 100MB. The results were not great. This effort was based on enabling .NET Core to run well in containers in IoT scenarios. More recently, we published a design and implementation to satisfy both server/cloud and IoT memory limits cases.

We concluded that the primary fix is to set a GC heap maximum significantly lower than the overall memory limit as a default behavior. In retrospect, this choice seems like an obvious requirement of our implementation. We also found that Java has taken a similar approach, introduced in Java 9 and updated in Java 10.

The following design summary describes the new .NET Core behavior (added in Preview 3) when cgroup limits are set:

  • Default GC heap size: maximum of 20mb or 75% of the memory limit on the container (see the worked example after this list)
  • Explicit size can be set as an absolute number or percentage of cgroup limit
  • Minimum reserved segment size per GC heap is 16mb, which will reduce the number of heaps created on machines with a large number of cores and small memory limits
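For example, with these defaults, a container run with a 256 MB limit gives the GC a heap maximum of max(20 MB, 75% of 256 MB) = 192 MB. The image and application names below are illustrative:

# Run a .NET Core 3.0 app with a 256 MB memory limit; the default GC heap
# maximum under the new behavior works out to 192 MB in this case.
docker run -m 256mb --rm mcr.microsoft.com/dotnet/core/runtime:3.0 dotnet MyApp.dll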

The GC change is the most critical part of our memory limits solution. It is also important to update BCL APIs to honor cgroup settings. Those changes are not included in Preview 3 but will come later. We’d appreciate feedback on which BCL APIs are most important to update first (another example).

Our focus in Preview 3 has been on making Docker limits work well for Linux containers. We’ll do the same for Windows containers in upcoming previews.

Docker Publishing Update

Microsoft teams are now publishing container images to the Microsoft Container Registry (MCR). There are two primary reasons for this change:

  • Syndicate Microsoft-provided container images to multiple registries, like Docker Hub and Red Hat.
  • Use Microsoft Azure as a global CDN for delivering Microsoft-provided container images.

On the .NET team, we are now publishing all .NET Core images to MCR. We switched the .NET Core Nightly product repo to MCR 2-3 weeks ago. The .NET Core product repo has now been moved to publishing to MCR as well, and includes .NET Core 3.0 Preview 3. As you can see from these links (if you click on them), we continue to have “home pages” on Docker Hub. We intend for that to continue indefinitely. MCR does not offer such pages, but relies on public registries, like Docker Hub, to provide users with image-related information.

The links to our old repos, such as microsoft/dotnet and microsoft/dotnet-nightly, now forward to the new locations. The images that existed at those locations still exist and will not be deleted.

We will continue servicing the floating tags in the old repos for the supported life of the various .NET Core versions. For example, 2.1-sdk, 2.2-runtime, and latest are examples of floating tags that will be serviced. A three-part version tag like 2.1.2-sdk will not be serviced, which was already the case.

We have deleted the 3.0 images in the old repos. We will only be supporting .NET Core 3.0 images in MCR and want to make sure that everyone realizes that MCR is the correct place to pull those images, and transitions to using MCR immediately.

For example, the correct tag string to pull the 3.0 SDK image now looks like the following:

mcr.microsoft.com/dotnet/core/sdk:3.0

The new MCR string will be used with both docker pull and in Dockerfile FROM statements.

This tag string previously looked like the following:

microsoft/dotnet:3.0-sdk
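In a Dockerfile, that means changing only the FROM line; everything else stays the same:

# Before: Docker Hub location
FROM microsoft/dotnet:3.0-sdk

# After: new MCR location
FROM mcr.microsoft.com/dotnet/core/sdk:3.0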

See Publishing .NET Core images to Microsoft Container Registry (MCR) for more information.

Index and Range

Index and Range types were introduced in Preview 1 to support the new C# 8.0 compiler index and range syntax (e.g. array[^10] and array[3..10]). We have polished these types for Preview 3 and added some more APIs to enable using Index and Range with some other types (e.g. String, [Readonly]Span<T>, [Readonly]Memory<T> and Array).

These changes enable the following syntax and scenarios:

// start with int[]
int[] nums = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
int lastNum = nums[^1]; // 10
int[] subsetNums = nums[2..6]; // {3, 4, 5, 6}

// Create a Memory<int> using nums as input
// Create no-copy slices of the array
Memory<int> numbers = nums;
Memory<int> lastTwoNums = numbers.Slice(^2); // {9, 10}
Memory<int> middleNums = numbers.Slice(4..8); // {5, 6, 7, 8}

// Create a substring using a range
string myString = "0123456789ABCDEF";
string substring = myString[0..5]; // "01234"

// Create a Memory<char> using a range
ReadOnlyMemory<char> myChars = myString.AsMemory();
ReadOnlyMemory<char> firstChars = myChars[0..5]; // {'0', '1', '2', '3', '4'}

// Get an offset with an Index
Index indexFromEnd = Index.FromEnd(3); // equivalent to [^3]
int offsetFromLength = indexFromEnd.GetOffset(10); // 7
int value = nums[offsetFromLength]; // 8

// Get an offset with a Range
Range rangeFromEnd = 5..^0;
(int offset, int length) = rangeFromEnd.GetOffsetAndLength(10); // offset = 5, length = 5
Memory<int> values = numbers.Slice(offset, length); // {6, 7, 8, 9, 10}

 

.NET Standard 2.1

This is the first preview that has support for authoring .NET Standard 2.1 libraries. By default, dotnet new classlib continues to create .NET Standard 2.0 libraries as we believe it gives you a better compromise between platform support and feature set. For example, .NET Standard 2.1 won’t be supported by .NET Framework.

In order to target .NET Standard 2.1, you’ll have to edit your project file and change the TargetFramework property to netstandard2.1:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netstandard2.1</TargetFramework>
  </PropertyGroup>

</Project>

If you’re using Visual Studio, you’ll need Visual Studio 2019 as .NET Standard 2.1 won’t be supported in Visual Studio 2017.

As the version number illustrates, .NET Standard 2.1 is a minor increment over what was added with .NET Standard 2.0 (we added ~3000 APIs). Here are the key additions:

  • Index and Range. This adds both the types and the indexers on the core types, such as String, ArraySegment<T>, and Span<T>.
  • IAsyncEnumerable<T>. This adds support for an async IEnumerable<T>. We don’t currently provide Linq-style extension methods but this gap is filled by the System.Linq.Async community project.
  • Support for Span<T> and friends. This includes both the span types and the hundreds of helper methods across various types that accept and return spans (e.g. Stream).
  • Reflection emit & capability APIs. This includes Reflection emit itself as well as two new properties on the RuntimeFeature class that allows you to check whether runtime code generation is supported at all (IsDynamicCodeSupported) and whether it’s compiled using a JIT or interpreted (IsDynamicCodeCompiled). This enables you to author .NET Standard libraries that can take advantage of dynamic code generation if available and fallback to other behavior when you run in environments that don’t have JIT and/or don’t have support for IL interpretation.
  • SIMD. .NET has had support for SIMD for a while now. We’ve leveraged it to speed up basic operations in the BCL, such as string comparisons. We’ve received quite a few requests to expose these APIs in .NET Standard as the functionality requires runtime support and thus cannot be provided meaningfully as a NuGet package.
  • DbProviderFactories. In .NET Standard 2.0 we added almost all of the primitives in ADO.NET to allow O/R mappers and database implementers to communicate. Unfortunately, DbProviderFactories didn’t make the cut for 2.0 so we’re adding it now. In a nutshell, DbProviderFactories allows libraries and applications to utilize a specific ADO.NET provider without knowing any of its specific types at compile time, by selecting among registered DbProviderFactory instances based on a name, which can be read from, for example, configuration settings.
  • General Goodness. Since .NET Core was open sourced, we’ve added many small features across the base class libraries such as System.HashCode for combining hash codes or new overloads on System.String. There are about 800 new members in .NET Core and virtually all of them got added in .NET Standard 2.1.

For more details on all the API additions and platform support, check out .NET Standard version table on GitHub.
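As a small illustration of two of the additions above, the following sketch uses IAsyncEnumerable<T> with C# 8’s await foreach plus the System.HashCode helper; the type and value names are made up for the example:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class NetStandard21Sample
{
    // An async stream: a library targeting .NET Standard 2.1 can expose this shape.
    static async IAsyncEnumerable<int> ProduceAsync()
    {
        for (int i = 0; i < 3; i++)
        {
            await Task.Delay(10); // simulate asynchronous work
            yield return i;
        }
    }

    static async Task Main()
    {
        await foreach (var value in ProduceAsync())
        {
            // HashCode.Combine is one of the smaller BCL additions mentioned above.
            Console.WriteLine($"value={value}, hash={HashCode.Combine(value, "sample")}");
        }
    }
}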

F# Update

We added the following for F# in Preview 3.

  • F# 4.6 (Preview) (in the SDK)
  • dotnet fsi (Preview) command

F# 4.6

  • Anonymous Record support
  • FSharp.Core additions
    • ValueOption function parity with Option
    • tryExactlyOne function for List, Array, and Seq

Learn more at Announcing F# 4.6 Preview.

dotnet fsi preview

F# interactive as a pure .NET Core application:

dotnet fsi --readline

Microsoft (R) F# Interactive version 10.4.0 for F# 4.6
Copyright (c) Microsoft Corporation. All Rights Reserved.

For help type #help;;

> let lst = [ 1; 2; 3; 4; 5];;
val lst : int list = [1; 2; 3; 4; 5]

> let squares = lst |> List.map (fun x -> x * x);;
val squares : int list = [1; 4; 9; 16; 25]

> let sumOfSquares = lst |> List.sumBy (fun x -> x * x);;
val sumOfSquares : int = 55

Has basic colorization of F# types (thanks to Saul Rennison):

F# colorization

 

AssemblyDependencyResolver

AssemblyDependencyResolver is a key component of dynamically loading assemblies. Because .NET Core’s binder does not do simple probing (such as looking next to the assembly), we added a mechanism to read a .deps.json file or a flat directory at runtime and resolve assemblies.

In the past, calling AssemblyLoadContext.LoadFromAssemblyPath(“…”) would fail to find dependent assemblies. Using the following works as expected with .NET Core 3.0:

var resolver = new AssemblyDependencyResolver(pluginPath);
var assemblyPath = resolver.ResolveAssemblyToPath(assemblyName);

See examples of this API in use.
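For orientation, the typical plugin-loading pattern built on top of this API looks roughly like the sketch below; the PluginLoadContext name and the null fallback are illustrative, not prescribed by the post:

using System.Reflection;
using System.Runtime.Loader;

// A load context for a plugin; dependencies are resolved via the plugin's
// .deps.json or the flat directory next to it.
class PluginLoadContext : AssemblyLoadContext
{
    private readonly AssemblyDependencyResolver _resolver;

    public PluginLoadContext(string pluginPath)
    {
        _resolver = new AssemblyDependencyResolver(pluginPath);
    }

    protected override Assembly Load(AssemblyName assemblyName)
    {
        string path = _resolver.ResolveAssemblyToPath(assemblyName);
        return path != null ? LoadFromAssemblyPath(path) : null; // null falls back to the default context
    }
}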

DLLMap and Native image resolver events

These new capabilities enable a better experience with working with dynamically loaded native dependencies.

NativeLibrary provides an encapsulation for loading a native library (using the same load logic as .NET Core P/Invoke) and provides the relevant helper functions (getting symbols, etc.). See the following usage pattern:

NativeLibrary.Load(mappedName, assembly, dllImportSearchPath);

We provided callbacks on P/Invoke module load to allow for replacement at runtime. This is similar to what is available with the Mono runtime. This feature allows an application to dynamically target Windows and Linux without having to use #if and rebuild. See the following usage pattern:

NativeLibrary.SetDllImportResolver(assembly, MapAndLoad);

We added symmetric native library resolution events for when modules are not found. Previously this was not possible in the default load context; it required inheriting from a class and overriding a protected method.

AssemblyLoadContext.Default.ResolvingUnmanagedDll += HandlerFail;

We are in the process of adding samples for these new capabilities.
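In the meantime, here is a rough sketch of how a DllImport resolver might be wired up; the library names and mapping logic are illustrative:

using System;
using System.Reflection;
using System.Runtime.InteropServices;

static class NativeResolverSample
{
    public static void Register()
    {
        // Hook P/Invoke resolution for this assembly so a logical library name
        // can be mapped to a platform-specific one at runtime.
        NativeLibrary.SetDllImportResolver(typeof(NativeResolverSample).Assembly, MapAndLoad);
    }

    private static IntPtr MapAndLoad(string libraryName, Assembly assembly, DllImportSearchPath? searchPath)
    {
        string mappedName = libraryName;
        if (libraryName == "nativedep")
        {
            mappedName = RuntimeInformation.IsOSPlatform(OSPlatform.Windows) ? "nativedep.dll" : "libnativedep.so";
        }

        return NativeLibrary.Load(mappedName, assembly, searchPath);
    }
}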

.NET Core Windows Desktop Project Update

We are continuing to make progress on the WPF and WinForms projects. We are still working on transitioning these code bases from a proprietary build system to an open source one, based on dotnet/arcade. We are coming to the tail end of this phase of the project and expect that the remaining parts of the WPF code base will be published afterwards, likely after Preview 4.

High DPI for Windows Forms Applications

As technology progresses, more varied display monitors are available for purchase. Dots-per-inch (DPI), which used to always be 96, is no longer a constant value. With that comes a need to be able to set different DPI modes for desktop applications.

Previously, the only way to set the DPI mode for your .NET Core WinForms application was via P/Invoke. In .NET Core 3.0 Preview 3 we are adding the ability to get and set the high DPI mode via the new Application.SetHighDpiMode method and Application.HighDpiMode property.

The SetHighDpiMode method will set the corresponding High DPI mode unless the setting has been set by other means like App.Manifest or P/Invoke before Application.Run.

The possible HighDpiMode values, as expressed by the HighDpiMode enum, are:

  • DpiUnaware
  • SystemAware
  • PerMonitor
  • PerMonitorV2
  • DpiUnawareGdiScaled

To learn more about high DPI modes, check out the High DPI Desktop Application Development on Windows article.
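A minimal sketch of opting into per-monitor DPI awareness from code looks like this; the empty Form is just a placeholder:

using System;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        // Must run before Application.Run and before any window handles are created,
        // and only takes effect if the mode wasn't already set via app manifest or P/Invoke.
        Application.SetHighDpiMode(HighDpiMode.PerMonitorV2);
        Application.EnableVisualStyles();
        Application.SetCompatibleTextRenderingDefault(false);
        Application.Run(new Form());
    }
}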

Entity Framework Project Update

In the first 3 previews, the EF team has prioritized API and behavior changes that could cause existing applications to break when upgrading from previous versions. These changes are themselves designed to improve EF Core or to unblock further improvements. But it also means that many other changes, including sought-after new features planned for EF Core 3.0, together with the porting of EF6 to .NET Core, haven’t been completed yet.

Implementing architectural changes early in the release helps us maximize the opportunities to collect customer feedback, understand any unintended impact of the changes, and react accordingly.

You can expect to see more detailed announcements in this blog, as we make progress in the delivery of new features in upcoming previews.

In the meantime, you can visit What’s new in EF Core 3.0 in the product documentation to find more information about the new features planned for 3.0 as well as a detailed description of the breaking changes included so far.

Closing

Take a look at the .NET Core 3.0 Preview 1 and Preview 2 posts if you missed those. With this post, they describe the complete set of new capabilities that have been added so far with the .NET Core 3.0 release.

Thanks for trying out .NET Core 3.0. Please continue to give us feedback, either in the comments or on GitHub. We are listening carefully and will continue to make changes based on your feedback.

The post Announcing .NET Core 3 Preview 3 appeared first on .NET Blog.


Converting an Excel Worksheet into a JSON document with C# and .NET Core and ExcelDataReader


Excel isn't a database, except when it is

I've been working on a little idea where I'd have an app (maybe a mobile app with Xamarin or maybe a SPA, I haven't decided yet) for easily accessing and searching across the 500+ videos from http://friday.azure.com.

HOWEVER. I don't have access to the database that hosts the metadata and while I'm trying to get at least read-only access to it (long story) the best I can do is a giant Excel spreadsheet dump that I was given that has all the video details.

This, of course, is sub-optimal, but regardless of how you feel about it, it's a database. Or, a data source at the very least! Additionally, since it was always going to end up as JSON in a cached in-memory database regardless, it doesn't matter much to me.

In real-world business scenarios, sometimes the authoritative source is an Excel sheet, sometimes it's a SQL database, and sometimes it's a flat file. Who knows?

What's most important (after clean data) is that the process one builds around that authoritative source is reliable and repeatable. For example, if I want to build a little app or one page website, yes, ideally I'd have a direct connection to the SQL back end. Other alternative sources could be a JSON file sitting on a simple storage endpoint accessible with a single HTTP GET. If the Excel sheet is on OneDrive/SharePoint/DropBox/whatever, I could have a small serverless function run when the files changes (or on a daily schedule) that would convert the Excel sheet into a JSON file and drop that file onto storage. Hopefully you get the idea. The goal here is clean, reliable pragmatism. I'll deal with the larger business process issue and/or system architecture and/or permissions issue later. For now the "interface" for my app is JSON.

So I need some JSON and I have this Excel sheet.

Turns out there's a lovely open source project and NuGet package called ExcelDataReader. There have been ways to get data out of Excel for decades. Literally decades. One of my first jobs was automating Microsoft Excel with Visual Basic 3.0 with COM Automation. I even blogged about getting data out of Excel into ASP.NET 16 years ago!

Today I'll use ExcelDataReader. It's really nice and it took less than an hour to get exactly what I wanted. I haven't gone and made it super clean and generic, refactored out a bunch of helper functions, so I'm interested in your thoughts. After I get this tight and reliable I'll drop it into an Azure Function and then focus on getting the JSON directly from the source.

A few gotchas that surprised me. I got a "System.NotSupportedException: No data is available for encoding 1252." Windows-1252 or CP-1252 (code page) is an old school text encoding (it's effectively ISO 8859-1). Turns out newer .NETs like .NET Core need the System.Text.Encoding.CodePages package as well as a call to System.Text.Encoding.RegisterProvider(System.Text.CodePagesEncodingProvider.Instance); to set it up for success. Also, that extra call to reader.Read at the start to skip over the Title row had me pause a moment.

using System;
using System.IO;
using System.Text;
using ExcelDataReader;
using Newtonsoft.Json;

namespace AzureFridayToJson
{
    class Program
    {
        static void Main(string[] args)
        {
            Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

            var inFilePath = args[0];
            var outFilePath = args[1];

            using (var inFile = File.Open(inFilePath, FileMode.Open, FileAccess.Read))
            using (var outFile = File.CreateText(outFilePath))
            {
                using (var reader = ExcelReaderFactory.CreateReader(inFile, new ExcelReaderConfiguration()
                    { FallbackEncoding = Encoding.GetEncoding(1252) }))
                using (var writer = new JsonTextWriter(outFile))
                {
                    writer.Formatting = Formatting.Indented; //I likes it tidy
                    writer.WriteStartArray();
                    reader.Read(); //SKIP FIRST ROW, it's TITLES.
                    do
                    {
                        while (reader.Read())
                        {
                            //peek ahead? Bail before we start anything so we don't get an empty object
                            var status = reader.GetString(0);
                            if (string.IsNullOrEmpty(status)) break;

                            writer.WriteStartObject();
                            writer.WritePropertyName("Status");
                            writer.WriteValue(status);

                            writer.WritePropertyName("Title");
                            writer.WriteValue(reader.GetString(1));

                            writer.WritePropertyName("Host");
                            writer.WriteValue(reader.GetString(6));

                            writer.WritePropertyName("Guest");
                            writer.WriteValue(reader.GetString(7));

                            writer.WritePropertyName("Episode");
                            writer.WriteValue(Convert.ToInt32(reader.GetDouble(2)));

                            writer.WritePropertyName("Live");
                            writer.WriteValue(reader.GetDateTime(5));

                            writer.WritePropertyName("Url");
                            writer.WriteValue(reader.GetString(11));

                            writer.WritePropertyName("EmbedUrl");
                            writer.WriteValue($"{reader.GetString(11)}player");
                            /*
                            <iframe src="https://channel9.msdn.com/Shows/Azure-Friday/Erich-Gamma-introduces-us-to-Visual-Studio-Online-integrated-with-the-Windows-Azure-Portal-Part-1/player" width="960" height="540" allowFullScreen frameBorder="0"></iframe>
                            */

                            writer.WriteEndObject();
                        }
                    } while (reader.NextResult());
                    writer.WriteEndArray();
                }
            }
        }
    }
}

The first pass is on GitHub at https://github.com/shanselman/AzureFridayToJson and the resulting JSON looks like this:

[
  {
    "Status": "Live",
    "Title": "Introduction to Azure Integration Service Environment for Logic Apps",
    "Host": "Scott Hanselman",
    "Guest": "Kevin Lam",
    "Episode": 528,
    "Live": "2019-02-26T00:00:00",
    "Url": "https://azure.microsoft.com/en-us/resources/videos/azure-friday-introduction-to-azure-integration-service-environment-for-logic-apps",
    "embedUrl": "https://azure.microsoft.com/en-us/resources/videos/azure-friday-introduction-to-azure-integration-service-environment-for-logic-appsplayer"
  },
  {
    "Status": "Live",
    "Title": "An overview of Azure Integration Services",
    "Host": "Lara Rubbelke",
    "Guest": "Matthew Farmer",
    "Episode": 527,
    "Live": "2019-02-22T00:00:00",
    "Url": "https://azure.microsoft.com/en-us/resources/videos/azure-friday-an-overview-of-azure-integration-services",
    "embedUrl": "https://azure.microsoft.com/en-us/resources/videos/azure-friday-an-overview-of-azure-integration-servicesplayer"
  },
  ...SNIP...

Thoughts? There's a dozen ways to have done this. How would you do this? Dump it into a DataSet and serialize objects to JSON, make an array and do the same, automate Excel itself (please don't do this), and on and on.

Certainly this would be easier if I could get a CSV file or something from the business person, but the issue is that I'm regularly getting new drops of this same sheet with new records added. Getting the suit to Save As | CSV reliably and regularly isn't sustainable.


Sponsor: Stop wasting time trying to track down the cause of bugs. Sentry.io provides full stack error tracking that lets you monitor and fix problems in real time. If you can program it, we can make it far easier to fix any errors you encounter with it.




Making C++ Exception Handling Smaller On x64


Visual Studio 2019 Preview 3 introduces a new feature to reduce the binary size of C++ exception handling (try/catch and automatic destructors) on x64. Dubbed FH4 (for __CxxFrameHandler4, see below), the new formatting and processing I developed for the data used by C++ exception handling is ~60% smaller than the existing implementation, resulting in overall binary reductions of up to 20% for programs with heavy usage of C++ exception handling.

How Do I Turn This On?

FH4 is currently off by default because the runtime changes required for Store applications could not make it into the current release. To turn FH4 on for non-Store applications, pass the undocumented flag “/d2FH4” to the MSVC compiler in Visual Studio 2019 Preview 3 and beyond.
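For a plain command-line build, that can be as simple as adding the flag to the cl invocation (the source file name is illustrative; in Visual Studio the flag goes under C/C++ > Command Line > Additional Options):

cl /d2FH4 /EHsc /O2 main.cpp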

We plan on enabling FH4 by default once the Store runtime has been updated. We’re hoping to do this in Visual Studio 2019 Update 1 and will update this post once we know more.

Tools Changes

Any installation of Visual Studio 2019 Preview 3 and beyond will have the changes in the compiler and C++ runtime to support FH4. The compiler changes exist internally under the aforementioned “/d2FH4” flag. The C++ runtime sports a new DLL called vcruntime140_1.dll that is automatically installed by VCRedist. This is required to expose the new exception handler __CxxFrameHandler4 that replaces the older __CxxFrameHandler3 routine. Static linking and app-local deployment of the new C++ runtime are both supported as well.

Now onto the fun stuff! The rest of this post will cover the internal results from trialing FH4 on Windows, Office, and SQL, followed by more in-depth technical details behind this new technology.

Motivation and Results

About a year ago, our partners on the C++/WinRT project came to the Microsoft C++ team with a challenge: how much could we reduce the binary size of C++ exception handling for programs that heavily used it?

In the context of a program using C++/WinRT, they pointed us to a Windows component, Microsoft.UI.Xaml.dll, which was known to have a large binary footprint due to C++ exception handling. I confirmed that this was indeed the case and generated the breakdown of binary size with the existing __CxxFrameHandler3, shown below. The percentages on the right side of the chart are the percent of total binary size occupied by specific metadata tables and outlined code.

Size Breakdown of Microsoft.UI.Xaml.dll using __CxxFrameHandler3

I won’t discuss in this post what the specific structures on the right side of the chart do (see James McNellis’s talk on how stack unwinding works on Windows for more details). Looking at the total metadata and code, however, a whopping 26.4% of the binary size was used by C++ exception handling. This is an enormous amount of space and was hampering adoption of C++/WinRT.

We’ve made changes in the past to reduce the size of C++ exception handling in the compiler without changing the runtime. This includes dropping metadata for regions of code that cannot throw and folding logically identical states. However, we were reaching the end of what we could do in just the compiler and wouldn’t be able to make a significant dent in something this large. Analysis showed that there were significant wins to be had but required fundamental changes in the data, code, and runtime. So we went ahead and did them.

With the new __CxxFrameHandler4 and its accompanying metadata, the size breakdown for Microsoft.UI.XAML.dll is now the following:

Size Breakdown of Microsoft.UI.Xaml.dll using __CxxFrameHandler4

The binary size used by C++ exception handling drops by 64% leading to an overall binary size decrease of 18.6% on this binary. Every type of structure shrank in size by staggering degrees:

EH Data | __CxxFrameHandler3 Size (Bytes) | __CxxFrameHandler4 Size (Bytes) | % Size Reduction
Pdata Entries | 147,864 | 118,260 | 20.0%
Unwind Codes | 224,284 | 92,810 | 58.6%
Function Infos | 255,440 | 27,755 | 89.1%
IP2State Maps | 186,944 | 45,098 | 75.9%
Unwind Maps | 80,952 | 69,757 | 13.8%
Catch Handler Maps | 52,060 | 6,147 | 88.2%
Try Maps | 51,960 | 5,196 | 90.0%
Dtor Funclets | 54,570 | 45,739 | 16.2%
Catch Funclets | 102,400 | 4,301 | 95.8%
Total | 1,156,474 | 415,063 | 64.1%

 

Combined, switching to __CxxFrameHandler4 dropped the overall size of Microsoft.UI.Xaml.dll from 4.4 MB down to 3.6 MB.

Trialing FH4 on a representative set of Office binaries shows a ~10% size reduction in DLLs that use exceptions heavily. Even in Word and Excel, which are designed to minimize exception usage, there’s still a meaningful reduction in binary size.

Binary | Old Size (MB) | New Size (MB) | % Size Reduction | Description
chart.dll | 17.27 | 15.10 | 12.6% | Support for interacting with charts and graphs
Csi.dll | 9.78 | 8.66 | 11.4% | Support for working with files that are stored in the cloud
Mso20Win32Client.dll | 6.07 | 5.41 | 11.0% | Common code that’s shared between all Office apps
Mso30Win32Client.dll | 8.11 | 7.30 | 9.9% | Common code that’s shared between all Office apps
oart.dll | 18.21 | 16.20 | 11.0% | Graphics features that are shared between Office apps
wwlib.dll | 42.15 | 41.12 | 2.5% | Microsoft Word’s main binary
excel.exe | 52.86 | 50.29 | 4.9% | Microsoft Excel’s main binary

 

Trialing FH4 on core SQL binaries shows a 4-21% reduction in size, primarily from metadata compression described in the next section:

Binary | Old Size (MB) | New Size (MB) | % Size Reduction | Description
sqllang.dll | 47.12 | 44.33 | 5.9% | Top-level services: language parser, binder, optimizer, and execution engine
sqlmin.dll | 48.17 | 45.83 | 4.8% | Low-level services: transactions and storage engine
qds.dll | 1.42 | 1.33 | 6.3% | Query store functionality
SqlDK.dll | 3.19 | 3.05 | 4.4% | SQL OS abstractions: memory, threads, scheduling, etc.
autoadmin.dll | 1.77 | 1.64 | 7.3% | Database tuning advisor logic
xedetours.dll | 0.45 | 0.36 | 21.6% | Flight data recorder for queries

 

The Tech

When analyzing what caused the C++ exception handling data to be so large in Microsoft.UI.Xaml.dll I found two primary culprits:

  1. The data structures themselves are large: metadata tables were fixed size with fields of image-relative offsets and integers each four bytes long. A function with a single try/catch and one or two automatic destructors had over 100 bytes of metadata.
  2. The data structures and code generated were not amenable to merging. The metadata tables contained image-relative offsets that prevented COMDAT folding (the process where the linker can fold together identical pieces of data to save space) unless the functions they represented were identical. In addition, catch funclets (outlined code from the program’s catch blocks) could not be folded even if they were code-identical because their metadata is contained in their parents.

To address these issues, FH4 restructures the metadata and code such that:

  1. Previous fixed sized values have been compressed using a variable-length integer encoding that drops >90% of the metadata fields from four bytes down to one. Metadata tables are now also variable length with a header to indicate if certain fields are present to save space on emitting empty fields.
  2. All image-relative offsets that can be function-relative have been made function-relative. This allows COMDAT folding between metadata of different functions with similar characteristics (think template instantiations) and allows these values to be compressed. Catch funclets have been redesigned to no longer have their metadata stored in their parents’ so that any code-identical catch funclets can now be folded to a single copy in the binary.

To illustrate this, let’s look at the original definition of the Function Info metadata table used by __CxxFrameHandler3. This is the starting table for the runtime when processing EH and points to the other metadata tables. This code is available publicly in any VS installation; look for <VS install path>\VC\Tools\MSVC\<version>\include\ehdata.h:

typedef const struct _s_FuncInfo
{
    unsigned int        magicNumber:29;     // Identifies version of compiler
    unsigned int        bbtFlags:3;         // flags that may be set by BBT processing
    __ehstate_t         maxState;           // Highest state number plus one (thus
                                            // number of entries in unwind map)
    int                 dispUnwindMap;      // Image relative offset of the unwind map
    unsigned int        nTryBlocks;         // Number of 'try' blocks in this function
    int                 dispTryBlockMap;    // Image relative offset of the handler map
    unsigned int        nIPMapEntries;      // # entries in the IP-to-state map. NYI (reserved)
    int                 dispIPtoStateMap;   // Image relative offset of the IP to state map
    int                 dispUwindHelp;      // Displacement of unwind helpers from base
    int                 dispESTypeList;     // Image relative list of types for exception specifications
    int                 EHFlags;            // Flags for some features.
} FuncInfo;

This structure is fixed size containing 10 fields each 4 bytes long. This means every function that needs C++ exception handling by default incurs 40 bytes of metadata.
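
As a quick sanity check of that 40-byte figure, here is a sketch that mirrors the header above field for field (assumptions: __ehstate_t is a 32-bit integer on this target, and the magicNumber/bbtFlags bit-fields share a single 32-bit word):

#include <cstdint>

struct FuncInfoLayoutSketch
{
    uint32_t magicNumberAndBbtFlags;   // magicNumber:29 and bbtFlags:3 share one 32-bit word
    int32_t  maxState;                 // __ehstate_t, assumed to be a 32-bit integer here
    int32_t  dispUnwindMap;
    uint32_t nTryBlocks;
    int32_t  dispTryBlockMap;
    uint32_t nIPMapEntries;
    int32_t  dispIPtoStateMap;
    int32_t  dispUnwindHelp;
    int32_t  dispESTypeList;
    int32_t  EHFlags;
};

static_assert(sizeof(FuncInfoLayoutSketch) == 40, "10 four-byte fields add up to 40 bytes");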

Now to the new data structure (<VS install path>\VC\Tools\MSVC\<version>\include\ehdata4_export.h):

struct FuncInfoHeader
{
    union
    {
        struct
        {
            uint8_t isCatch     : 1;  // 1 if this represents a catch funclet, 0 otherwise
            uint8_t isSeparated : 1;  // 1 if this function has separated code segments, 0 otherwise
            uint8_t BBT         : 1;  // Flags set by Basic Block Transformations
            uint8_t UnwindMap   : 1;  // Existence of Unwind Map RVA
            uint8_t TryBlockMap : 1;  // Existence of Try Block Map RVA
            uint8_t EHs         : 1;  // EHs flag set
            uint8_t NoExcept    : 1;  // NoExcept flag set
            uint8_t reserved    : 1;
        };
        uint8_t value;
    };
};


struct FuncInfo4
{
    FuncInfoHeader header;
    uint32_t bbtFlags;         // flags that may be set by BBT processing


    int32_t  dispUnwindMap;    // Image relative offset of the unwind map
    int32_t  dispTryBlockMap;  // Image relative offset of the handler map
    int32_t  dispIPtoStateMap; // Image relative offset of the IP to state map
    uint32_t dispFrame;        // displacement of address of function frame wrt establisher frame, only used for catch funclets
};

Notice that:

  1. The magic number has been removed; emitting 0x19930522 every time becomes a problem when a program has thousands of these entries.
  2. EHFlags has been moved into the header while dispESTypeList has been phased out due to dropped support of dynamic exception specifications in C++17. The compiler will default to the older __CxxFrameHandler3 if dynamic exception specifications are used.
  3. The lengths of the other tables are no longer stored in “Function Info 4”. This allows COMDAT folding to fold more of the pointed-to tables even if the “Function Info 4” table itself cannot be folded.
  4. (Not explicitly shown) The dispFrame and bbtFlags fields are now variable-length integers. The high-level representation leaves it as an uint32_t for easy processing.
  5. bbtFlags, dispUnwindMap, dispTryBlockMap, and dispFrame can be omitted depending on the fields set in the header.

Taking all this into account, the average size of the new “Function Info 4” structure is now 13 bytes (1 byte header + three 4 byte image relative offsets to other tables) which can scale down even further if some tables are not needed. The lengths of the tables were moved out, but these values are now compressed and 90% of them in Microsoft.UI.Xaml.dll were found to fit within a single byte. Putting that all together, this means the average size to represent the same functional data in the new handler is 16 bytes compared to the previous 40 bytes—quite a dramatic improvement!
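
The exact on-disk encoding FH4 uses isn’t spelled out in this post, so the decoder below is only an illustration of the general technique (a LEB128-style scheme): each byte carries 7 payload bits, the high bit marks continuation, and small values fit in a single byte.

#include <cstddef>
#include <cstdint>

// Decode one variable-length unsigned integer. This is an illustration of the
// technique, not the real __CxxFrameHandler4 format.
uint32_t DecodeVarUInt(const uint8_t* bytes, size_t* bytesRead)
{
    uint32_t value = 0;
    size_t i = 0;
    for (int shift = 0; shift < 32; shift += 7)
    {
        uint8_t b = bytes[i++];
        value |= static_cast<uint32_t>(b & 0x7F) << shift;
        if ((b & 0x80) == 0)           // high bit clear: this was the last byte
        {
            break;
        }
    }
    *bytesRead = i;
    return value;                      // values below 128 decode from a single byte
}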

For folding, let’s look at the number of unique tables and funclets with the old and new handler:

EH Data | Count in __CxxFrameHandler3 | Count in __CxxFrameHandler4 | % Reduction
Pdata Entries | 12,322 | 9,855 | 20.0%
Function Infos | 6,386 | 2,747 | 57.0%
IP2State Map Entries | 6,363 | 2,148 | 66.2%
Unwind Map Entries | 1,487 | 1,464 | 1.5%
Catch Handler Maps | 2,603 | 601 | 76.9%
Try Maps | 2,598 | 648 | 75.1%
Dtor Funclets | 2,301 | 1,527 | 33.6%
Catch Funclets | 2,603 | 84 | 96.8%
Total | 36,663 | 19,074 | 48.0%

 

The number of unique EH data entries drops by 48% from creating additional folding opportunities by removing RVAs and redesigning catch funclets. I specifically want to call out the number of catch funclets italicized in green: it drops from 2,603 down to only 84. This is a consequence of C++/WinRT translating HRESULTs to C++ exceptions which generates plenty of code-identical catch funclets that can now be folded. Certainly a drop of this magnitude is on the high-end of outcomes but nevertheless demonstrates the potential size savings folding can achieve when the data structures are designed with it in mind.

Performance

With the design introducing compression and modifying runtime execution there was a concern of exception handling performance being impacted. The impact, however, is a positive one: exception handling performance improves with __CxxFrameHandler4 as opposed to __CxxFrameHandler3. I tested throughput using a benchmark program that unwinds through 100 stack frames each with a try/catch and 3 automatic objects to destruct. This was run 50,000 times to profile execution time, leading to overall execution times of:

 | __CxxFrameHandler3 | __CxxFrameHandler4
Execution Time | 4.84s | 4.25s

 

Profiling showed decompression does introduce additional processing time but its cost is outweighed by fewer stores to thread-local storage in the new runtime design.
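
For readers who want to run a similar measurement, here is a minimal sketch in the spirit of the benchmark described above (it is not the exact program used for the numbers in the table): an exception unwinds through 100 stack frames, each with a try/catch and three automatic objects to destroy, repeated 50,000 times.

#include <chrono>
#include <cstdio>

struct Tracked
{
    ~Tracked() { ++destructions; }     // give the destructor an observable side effect
    static int destructions;
};
int Tracked::destructions = 0;

void Recurse(int depth)
{
    Tracked a, b, c;                   // three automatic objects per frame
    try
    {
        if (depth == 0)
            throw 42;                  // start unwinding from the deepest frame
        Recurse(depth - 1);
    }
    catch (...)
    {
        throw;                         // rethrow so every frame's catch handler runs
    }
}

int main()
{
    constexpr int Frames = 100;
    constexpr int Iterations = 50000;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < Iterations; ++i)
    {
        try { Recurse(Frames - 1); } catch (int) { /* swallow at the top */ }
    }
    std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start;
    std::printf("%d unwinds, %d destructor calls, %.2fs\n",
                Iterations, Tracked::destructions, elapsed.count());
}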

Future Plans

As mentioned in the title, FH4 is currently only enabled for x64 binaries. However, the techniques described are extensible to ARM32/ARM64 and to a lesser extent x86. We’re currently looking for good examples (like Microsoft.UI.Xaml.dll) to motivate extending this technology to other platforms—if you think you have a good use case let us know!

The process of integrating the runtime changes for Store applications to support FH4 is in flight. Once that’s done, the new handler will be enabled by default so that everyone can get these binary size savings with no additional effort.

Closing Remarks

For anybody who thinks their x64 binaries could do with some trimming down: try out FH4 (via ‘/d2FH4’) today! We’re excited to see what savings this can provide now that this feature is out in the wild. Of course, if you encounter any issues please let us know in the comments below, by e-mail (visualcpp@microsoft.com), or through Developer Community. You can also find us on Twitter (@VisualC).

Thanks to Kenny Kerr for directing us to Microsoft.UI.Xaml.dll, Ravi Pinjala for gathering the numbers on Office, and Robert Roessler for trialing this out on SQL.

 

The post Making C++ Exception Handling Smaller On x64 appeared first on C++ Team Blog.

Microsoft continues to build the case for data estate modernization on Azure


Special thanks to Rik Tamm-Daniels and the Informatica team for their contribution to this blog post. ​

With the latest release of Azure SQL Data Warehouse, Microsoft doubles down on Azure SQL DW as one of the core data services for digital transformation on Azure. In addition to the fundamental benefits of agility, on-demand scaling and unlimited compute availability, the most recent price-to-performance metrics from the GigaOM report are one of several compelling arguments Microsoft has made for customers to adopt Azure SQL DW. Interestingly, Microsoft is also announcing the general availability of Azure Data Lake Gen 2 and Azure Data Explorer. Along with Power BI for rich visualization, this enhanced set of capabilities cements Microsoft’s leadership position around Cloud Scale Analytics.

Every day, I speak with joint Informatica and Microsoft customers who are looking to transform their approach to their data estate with a cohesive data lake and cloud data warehousing solution architecture. These customers range from global logistics companies, to auto manufacturers to the world’s largest insurers, and all of them see the tremendous potential of the Microsoft modern data estate approach; in fact, just via Informatica's iPaaS (integration platform-as-a-service) offering, Informatica Intelligent Cloud Services, we’ve seen a significant quarter-to-quarter growth in customer data volumes being moved to Azure SQL DW.

Of course, as compelling as the Azure SQL DW technology is, for many customers, modernizing a legacy enterprise data warehouse is a daunting proposition to even consider. The thought of touching the intricate web of dependencies around the warehouse can keep even the most battle-tested CIO up at night. A key consideration when attempting your own cloud data warehousing or data modernization initiative is to ensure you have intelligence about the existing schemas, lineage and dependencies, so that you can incrementally unravel the data web surrounding the warehouse and, with laser-like precision, begin to move workloads and use cases to Azure SQL DW.

Enter Informatica’s Enterprise Data Catalog, with full end-to-end source-to-destination lineage and searchable, machine-learning- and AI-driven intelligent metadata about what data lives where in the warehouse, to clear the fog of complexity and illuminate a clear path to cloud data warehousing. In fact, the concept of discovery- and catalog-driven modernization is such a compelling leap forward that Microsoft and Informatica developed a single sign-on Data Accelerator on Informatica’s Intelligent Cloud Services on Azure that can be accessed directly from the Azure SQL DW management console with your Azure credentials.


Data Accelerator for Azure

Want to see how Informatica and Microsoft can jumpstart your cloud data warehousing modernization initiative? Join us on Informatica's world tour of hands-on workshops at a Microsoft Technology Center near you. Workshops are taking place in North America right now and will be coming to EMEA and APJ very soon!

Register here: Cloud Data Warehouse Modernization for Azure Workshop

Intel and Microsoft bring optimizations to deep learning on Azure


This post is co-authored with Ravi Panchumarthy and Mattson Thieme from Intel.

We are happy to announce that Microsoft and Intel are partnering to bring optimized deep learning frameworks to Azure. These optimizations are available in a new offering on the Azure marketplace called the Intel Optimized Data Science VM for Linux (Ubuntu).

Over the last few years, deep learning has become the state of the art for several machine learning and cognitive applications. Deep learning is a machine learning technique that leverages neural networks with multiple layers of non-linear transformations, so that the system can learn from data and build accurate models for a wide range of machine learning problems. Computer vision, language understanding, and speech recognition are all examples of deep learning at play today. Innovations in deep neural networks in these domains have enabled these algorithms to reach human level performance in vision, speech recognition and machine translation. Advances in this field continually excite data scientists, organizations and media outlets alike. To many organizations and data scientists, doing deep learning well at scale poses challenges due to technical limitations.

Often, default builds of popular deep learning frameworks like TensorFlow are not fully optimized for training and inference on CPU. In response, Intel has open-sourced framework optimizations for Intel® Xeon processors. Now, through partnering with Microsoft, Intel is helping you accelerate your own deep learning workloads on Microsoft Azure with this new marketplace offering.

"Microsoft is always looking at ways in which our customers can get the best performance for a wide range of machine learning scenarios on Azure. We are happy to partner with Intel to combine the toolsets from both the companies and offer them in a convenient pre-integrated package on the Azure marketplace for our users” 

– Venky Veeraraghavan, Partner Group Program manager, ML platform team, Microsoft.

Accelerating Deep Learning Workloads on Azure

Built on top of the popular Data Science Virtual Machine (DSVM), this offer adds new Python environments that contain Intel’s optimized versions of TensorFlow and MXNet. These optimizations leverage Intel® Advanced Vector Extensions 512 (Intel® AVX-512) and the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) to accelerate training and inference on Intel® Xeon® processors. When running on an Azure F72s_v2 VM instance, these optimizations yielded an average 7.7X speedup in training throughput across all standard CNN topologies. You can find more details on the optimization practice here.

As a data scientist or AI developer, this change is quite transparent. You still code with the standard TensorFlow or MXNet frameworks. You can also use the new set of Python (conda) environments (intel_tensorflow_p36, intel_mxnet_p36) on the DSVM to run your code to take full advantage of all the optimizations on an Intel® Xeon Processor based F-Series or H-Series VM instance on Azure. Since this product is built using the DSVM as the base image, all the rich tools for data science and machine learning are still available to you. Once you develop your code and train your models, you can deploy them for inferencing on either the cloud or edge.

“Intel and Microsoft are committed to democratizing artificial intelligence by making it easy for developers and data scientists to take advantage of Intel hardware and software optimizations on Azure for machine learning applications. The Intel Optimized Data Science Virtual Machine (DSVM) provides up to a 7.7X speedup on existing frameworks without code modifications, benefiting Microsoft and Intel customers worldwide”

– Binay Ackalloor, Director Business Development, AI Products Group, Intel.

Performance

In Intel’s benchmark tests run on an Azure F72s_v2 instance, here are the results comparing the optimized version of TensorFlow with the standard TensorFlow builds.

Bar graph of Default TensorFlow vs. Intel Optimization for TensorFlow

Figure 1: Intel® Optimization for TensorFlow provides an average of 7.7X increase (average indicated by the red line) in training throughput on major CNN topologies. Run your own benchmarks using tf_cnn_benchmarks. Performance results are based on Intel testing as of 01/15/2019. Find the complete testing configuration here.

Getting Started

To get started with the Intel Optimized DSVM, click on the offer in the Azure Marketplace, then click “GET IT NOW”. Once you answer a few simple questions on the Azure Portal, your VM is created with all the DSVM tool sets and the Intel optimized deep learning frameworks pre-configured and ready to use.

Screenshot of Intel Optimized Data Science VM for Linux in Azure marketplace

The Intel Optimized Data Science VM is an ideal environment to develop and train deep learning models on Intel Xeon processor-based VM instances on Azure. Microsoft and Intel will continue their long partnership to explore additional AI solutions and framework optimizations to other services on Azure like the Azure Machine Learning service and Azure IoT Edge.

Next steps

Visual Studio Code February 2019
