I was working/pairing with Damian today because I wanted to get my git commit hashes and build ids embedded into the actual website so I could see exactly what commit is in production.
There are a few things here, and it's all in my ASP.NET web app's main layout page, called _layout.cshtml. You can learn all about ASP.NET Core 101, .NET, and C# over at https://dot.net/videos if you'd like. They're lovely videos.
First, the obvious floating copyright year. Then a few credits that are hard coded.
Next, a call to @System.Runtime.InteropServices.RuntimeInformation.FrameworkDescription, which gives me this string: ".NET Core 3.1.2". Note that there was a time when that property was somewhat goofy, but no longer.
I have two kinds of things I want to store along with my build artifact and output.
I want the Git commit hash of the code that was deployed.
Then I want to link it back to my source control. Note that my site is a private repo, so you'll get a 404.
I want the Build Number and the Build ID.
This way I can link back to my Azure DevOps site.
Adding a Git Commit Hash to your .NET assembly
There are lots of assembly-level attributes you can add to your .NET assembly. One lovely one is AssemblyInformationalVersion, and if you pass in SourceRevisionId on the dotnet build command line, it shows up in there automatically. Here's an example:
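In a CI build step you might pass the current commit in like this (the hash below is just a sample value; in a real pipeline you'd pull it from whatever variable your CI system exposes for the commit):

dotnet build /p:SourceRevisionId=a34a913742f8845d3da5309b7b17242222d41a21

The SDK appends whatever you pass as SourceRevisionId onto the version with a '+' and stamps the result into the AssemblyInformationalVersionAttribute.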
Sweet. That will put in VERSION+HASH, so we'll pull that out with a utility class Damian made, like this (the full class is shown later):
public string GitHash
{
    get
    {
        if (string.IsNullOrEmpty(_gitHash))
        {
            var version = "1.0.0+LOCALBUILD"; // Dummy version for local dev
            var appAssembly = typeof(AppVersionInfo).Assembly;
            var infoVerAttr = (AssemblyInformationalVersionAttribute)appAssembly
                .GetCustomAttributes(typeof(AssemblyInformationalVersionAttribute)).FirstOrDefault();

            if (infoVerAttr != null && infoVerAttr.InformationalVersion.Length > 6)
            {
                // Hash is embedded in the version after a '+' symbol, e.g. 1.0.0+a34a913742f8845d3da5309b7b17242222d41a21
                version = infoVerAttr.InformationalVersion;
            }

            _gitHash = version.Substring(version.IndexOf('+') + 1);
        }

        return _gitHash;
    }
}
Displaying it is then trivial given the helper class we'll see in a minute. Note the hardcoded path to my private repo. No need to make things complex.
deployed from commit <a href="https://github.com/shanselman/hanselminutes-core/commit/@appInfo.GitHash">@appInfo.ShortGitHash</a>
Getting and Displaying Azure DevOps Build Number and Build ID
This one is a little more complex. We could theoretically tunnel this info into an assembly as well, but it's just as easy, if not easier, to put it into a text file and make sure it's part of the ContentRootPath (meaning it's just in the root of the website's folder).
To be clear, this is optional: there are ways to put this info in an attribute, but not without messing around with your csproj using some not-well-documented stuff. I like a clean csproj, so I like this. Ideally there'd be another thing like SourceRevisionId to carry this metadata.
You'd need to emit an extra assembly attribute from your csproj and then pull it out with reflection. Meh.
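If you did go that route, reading the value back out might look something like this. It's just a sketch, and the AssemblyMetadataAttribute key name here ("BuildNumber") is an assumption for illustration, not what we actually shipped:

using System.Linq;
using System.Reflection;

// Assumes the csproj emitted an AssemblyMetadataAttribute with a "BuildNumber" key
var buildNumber = typeof(AppVersionInfo).Assembly
    .GetCustomAttributes<AssemblyMetadataAttribute>()
    .FirstOrDefault(a => a.Key == "BuildNumber")?.Value;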
I'm cheating a little as I gave it the .json extension, only because JSON files are copied and brought along as "Content." If it didn't have an extension, I would need to copy it manually with my csproj.
So, to be clear: two build variables inside a little text file, one per line. Again, that file is in ContentRootPath and was zipped up and deployed with our web app.
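For reference, the file itself is nothing fancy. It might look like this (these values are made up; yours come from your pipeline's build variables, and the file name is whatever you told your pipeline to write):

20200308.4
12345

Then a little helper class from Damian reads it: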
using System;
using System.IO;
using System.Linq;
using System.Reflection;
using Microsoft.Extensions.Hosting;

public class AppVersionInfo
{
    // The file name here is just an example - use whatever name your pipeline writes out
    private readonly string _buildFileName = "buildinfo.json";
    private string _buildFilePath;
    private string _buildNumber;
    private string _buildId;
    private string _gitHash;
    private string _gitShortHash;

    public AppVersionInfo(IHostEnvironment hostEnvironment)
    {
        _buildFilePath = Path.Combine(hostEnvironment.ContentRootPath, _buildFileName);
    }

    public string BuildNumber
    {
        get
        {
            // Build number format should be yyyyMMdd.# (e.g. 20200308.1)
            if (string.IsNullOrEmpty(_buildNumber))
            {
                if (File.Exists(_buildFilePath))
                {
                    var fileContents = File.ReadLines(_buildFilePath).ToList();

                    // First line is build number, second is build id
                    if (fileContents.Count > 0)
                    {
                        _buildNumber = fileContents[0];
                    }
                    if (fileContents.Count > 1)
                    {
                        _buildId = fileContents[1];
                    }
                }

                if (string.IsNullOrEmpty(_buildNumber))
                {
                    _buildNumber = DateTime.UtcNow.ToString("yyyyMMdd") + ".0";
                }
                if (string.IsNullOrEmpty(_buildId))
                {
                    _buildId = "123456";
                }
            }

            return _buildNumber;
        }
    }

    public string BuildId
    {
        get
        {
            if (string.IsNullOrEmpty(_buildId))
            {
                var _ = BuildNumber;
            }

            return _buildId;
        }
    }

    public string GitHash
    {
        get
        {
            if (string.IsNullOrEmpty(_gitHash))
            {
                var version = "1.0.0+LOCALBUILD"; // Dummy version for local dev
                var appAssembly = typeof(AppVersionInfo).Assembly;
                var infoVerAttr = (AssemblyInformationalVersionAttribute)appAssembly
                    .GetCustomAttributes(typeof(AssemblyInformationalVersionAttribute)).FirstOrDefault();

                if (infoVerAttr != null && infoVerAttr.InformationalVersion.Length > 6)
                {
                    // Hash is embedded in the version after a '+' symbol, e.g. 1.0.0+a34a913742f8845d3da5309b7b17242222d41a21
                    version = infoVerAttr.InformationalVersion;
                }

                _gitHash = version.Substring(version.IndexOf('+') + 1);
            }

            return _gitHash;
        }
    }

    public string ShortGitHash
    {
        get
        {
            if (string.IsNullOrEmpty(_gitShortHash))
            {
                _gitShortHash = GitHash.Substring(GitHash.Length - 6, 6);
            }

            return _gitShortHash;
        }
    }
}
How do we access this class? Simple! It's a Singleton added in one line in Startup.cs's ConfigureServices():
services.AddSingleton<AppVersionInfo>();
Then injected in one line in our _layout.cshtml!
@inject AppVersionInfo appInfo
Then I can use it and it's easy. I could put an environment tag around it to make it only show up in staging:
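Something like this, as a sketch; the markup inside is just the same kind of thing we saw above, and the exact footer content is up to you:

<environment include="Staging">
    <div>
        deployed from commit <a href="https://github.com/shanselman/hanselminutes-core/commit/@appInfo.GitHash">@appInfo.ShortGitHash</a>
        (build @appInfo.BuildNumber)
    </div>
</environment>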
I could also wrap it all in a cache tag like this. Worst case, for a few days or weeks at the start of a new year, the year is off.
<cache expires-after="@TimeSpan.FromDays(30)">
    @* copyright year, credits, and version/build info go here *@
</cache>
Thoughts on this technique?
Sponsor: This week's sponsor is...me! This blog and my podcast have been a labor of love for over 18 years. Your sponsorship pays my hosting bills for both AND allows me to buy gadgets to review AND the occasional taco. Join me!
The increased use of renewables, resiliency challenges, and sustainability concerns are all disrupting the energy industry today. New technologies are accelerating the way we source, store, and distribute energy. With IoT, we can gain new insights about the physical world that enables us to optimize and create more efficient processes, reduce energy waste, and track specific consumption. This is a great opportunity for IoT to support power and utilities (P&U) companies across grid assets, electric vehicles, energy optimization, load balancing, and emissions monitoring.
We've recently published a new IoT Signals report focused on the P&U industry. The report provides an industry pulse on the state of IoT adoption to help inform us how to better serve our partners and customers, as well as help energy companies develop their own IoT strategies. We surveyed global decision-makers in P&U organizations to deliver an industry-level view of the IoT ecosystem, including adoption rates, related technology trends, challenges, and benefits of IoT.
The study found that while IoT is almost universally adopted in P&U, it comes with complexity. Companies are commonly deploying IoT to improve the efficiency of operations and employee productivity, but can be challenged by skills and knowledge shortages, privacy and security concerns, and timing and deployment issues. To summarize the findings:
Top priorities and use cases for IoT in power and utilities
Optimizing processes through automation is critical for P&U IoT use. Top IoT use cases in P&U include automation-heavy processes such as smart grid automation, energy optimization and load balancing, smart metering, and predictive load forecasting. In support of this, artificial intelligence (AI) is often a component of energy IoT solutions, and they are often budgeted together. Almost all adopters have either already integrated AI into an IoT solution or are considering integration.
Using IoT to improve both data security and employee safety is a top priority. Almost half of decision-makers we talked to use IoT to make their IT practices more secure. Another third are implementing IoT to make their workplaces safer, as well as improve the safety of their employees.
P&U companies also leverage IoT to secure their physical assets. Many P&U companies use IoT to secure various aspects of their operations through equipment management and infrastructure maintenance.
The future is bright with IoT adoption continuing to focus on automation, with growth in adoption for use cases related to optimizing energy and creating more efficient maintenance systems.
Today, customers around the world are telling us they are heavily investing in four common use cases for IoT in the energy sector:
Grid asset maintenance
Visualize your grid’s topology, gather data from grid assets, and define rules to trigger alerts. Use these insights to predict maintenance and provide more safety oversight. Prevent failures and avoid critical downtime by monitoring the performance and condition of your equipment.
Energy optimization and load balancing
Balance energy supply and demand to alleviate pressure on the grid and prevent serious power outages. Avoid costly infrastructure upgrades and gain flexibility by using distributed energy resources to drive energy optimization.
Emissions monitoring and reduction
Monitor emissions in near real-time and make your emissions data more readily available. Work towards sustainability targets and clean energy adoption by enabling greenhouse gas and carbon accounting and reporting.
E-mobility
Remotely maintain and service electric vehicle (EV) charging points that support various charging speeds and vehicle types. Make it easier to own and operate electric vehicles by incentivizing ownership and creating new visibility into energy usage.
Learn more about IoT for energy
Read about the real world customers doing incredible things with IoT for energy where you can learn about market leaders like Schneider Electric making remote asset management easier using predictive analytics.
"Traditionally, machine learning is something that has only run in the cloud … Now, we have the flexibility to run it in the cloud or at the edge—wherever we need it to be." Matt Boujonnier, Analytics Application Architect, Schneider Electric.
Read the blog where we announced Microsoft will be carbon negative by 2030 and discussed our partner Vattenfall delivering a new, highly transparent 24/7 energy matching solution; a first-of-its-kind approach that gives customers the ability to choose the green energy they want and ensure their consumption matches that goal using Azure IoT.
We are committed to helping P&U customers bring their vision to life with IoT, and this starts with simplifying and securing IoT. Our customers are embracing IoT as a core strategy to drive better outcomes for energy providers, energy users, and the planet. We are heavily investing in this space, committing $5 billion in IoT and intelligent edge innovation by 2022, and growing our IoT and intelligent edge partner ecosystem.
When IoT is foundational to a transformation strategy, it can have a significantly positive impact on the bottom line, customer experiences, and products. We are invested in helping our partners, customers, and the broader industry to take the necessary steps to address barriers to success. Read the full IoT Signals energy report and learn how we're helping power and utilities companies embrace the future and unlock new opportunities with IoT.
At Microsoft Ignite, we announced new Microsoft Azure Migrate assessment capabilities that further simplify migration planning. In this post, I will talk about how you can plan migration of physical servers. Using this feature, you can also plan migration of virtual machines of any hypervisor or cloud. You can get started right away with these features by creating an Azure Migrate project or using an existing project.
Previously, Azure Migrate: Server Assessment only supported VMware and Hyper-V virtual machine assessments for migration to Azure. At Ignite 2019, we added physical server support for assessment features like Azure suitability analysis, migration cost planning, performance-based rightsizing, and application dependency analysis. You can now plan at-scale, assessing up to 35K physical servers in one Azure Migrate project. If you use VMware or Hyper-V as well, you can discover and assess both physical and virtual servers in the same project. You can create groups of servers, assess by group and refine the groups further using application dependency information.
While this feature is in preview, the preview is covered by customer support and can be used for production workloads. Let us look at how the assessment helps you plan migration.
Azure suitability analysis
The assessment checks Azure support for each server discovered and determines whether the server can be migrated as-is to Azure. If incompatibilities are found, remediation guidance is automatically provided. You can customize your assessment by changing its properties, and recomputing the assessment. Among other customizations, you can choose a virtual machine series of your choice and specify the uptime of the workloads you will run in Azure.
Cost estimation and sizing
Assessment also provides detailed cost estimates. Performance-based rightsizing assessments can be used to optimize on cost; the performance data of your on-premises server is used to recommend a suitable Azure Virtual Machine and disk SKU. This helps to optimize on cost and right-size as you migrate servers that might be over-provisioned in your on-premises data center. You can apply subscription offers and Reserved Instance pricing on the cost estimates.
Dependency analysis
Once you have established cost estimates and migration readiness, you can plan your migration phases. Using the dependency analysis feature, you can understand which workloads are interdependent and need to be migrated together. This also helps ensure you do not leave critical elements behind on-premises. You can visualize the dependencies in a map or extract the dependency data in a tabular format. You can divide your servers into groups and refine the groups for migration by reviewing the dependencies.
Assess your physical servers in four simple steps
Create an Azure Migrate project and add the Server Assessment solution to the project.
Set up the Azure Migrate appliance and start discovery of your servers. To set up discovery, the server names or IP addresses are required. Each appliance supports discovery of 250 servers. You can set up more than one appliance if required.
Once you have successfully set up discovery, create assessments and review the assessment reports.
Use the application dependency analysis features to create and refine server groups to phase your migration.
When you are ready to migrate the servers to Azure, you can use Server Migration to carry out the migration. You can read more about migrating physical servers here. In the coming months, we will add support for application discovery and agentless dependency analysis on physical servers as well.
Note that the inventory metadata gathered is persisted in the geography you select while creating the project. You can select a geography of your choice. Server Assessment is available today in Asia Pacific, Australia, Brazil, Canada, Europe, France, India, Japan, Korea, United Kingdom, and United States geographies.
Get started right away by creating an Azure Migrate project. In the upcoming blogs, we will talk about import-based assessments, application discovery, and agentless dependency analysis.
Resources to get started
Tutorial on how to assess physical servers using Azure Migrate: Server Assessment.
Guide on how to plan an assessment for a large-scale environment. Each appliance supports discovery of 250 servers. You can discover more servers by adding more appliances.
Tutorial on how to migrate physical servers using Azure Migrate: Server Migration.
How do you move tens of thousands of employees to remote work overnight? With the COVID-19 outbreak spreading around the world, that was the big question on our minds at Microsoft last week.
On February 29, R 3.6.3 was released and is now available for Windows, Linux and Mac systems. This update, codenamed "Holding the Windsock", fixes a few minor bugs, and as a minor update maintains compatibility with scripts and packages written for prior versions of R 3.6.
February 29 is an auspicious date, because that was the day that R 1.0.0 was released to the world: February 29, 2000. In the video below from the CelebRation2020 conference marking the 20th anniversary of R, core member Peter Dalgaard reflects on the origins of R, and releases R 3.6.3 live on stage (at the 33-minute mark).
R has advanced tremendously in the last 20 years, and the R language and the community around it show no signs of slowing down, as demonstrated by this chart of R package downloads (from Jozef Hajnala's excellent retrospective on R).
R 3.6.3 is likely to be the last update in the R 3.6 series, before R 4.0.0 is released on April 24 with many exciting new features. For more on R 3.6.3, including all the changes in this release, check out the official announcement on the link below.
Today we released new versions of both the Microsoft Emulator and the Windows 10X Emulator Image (Preview) to the Microsoft Store. The updated Microsoft Emulator is version 1.1.54.0 and the updated Windows 10X Emulator Image is version 10.0.19578. This refresh includes many updates to Windows 10X including the Win32 Container. Information on installation and requirements can be found at Get Windows dev tools.
We want your feedback so please use the Feedback Hub!
Features and Issues in this release:
Check for new images in the Emulator!
The Microsoft Emulator version 1.1.54.0 now includes the ability to query the Store for updated images and install them. On first run of the emulator, if there are no images installed, it will prompt to download an image. The developer can also choose to check for new images through the File->’Download emulator images’ menu item.
Test existing applications in the emulator on released versions of Windows
The Windows 10X Emulator Image version 10.0.19578 includes a new EULA that no longer requires it to be installed on a Windows Insiders machine. You can now install it on Windows 10 version 10.0.17763 or higher. With released SDKs, developers can use this new configuration to test their existing apps on dual-screen devices and to enhance their app experiences with dual-screen patterns, taking advantage of the TwoPaneView class and leveraging the Wonder Bar with CompactOverlay.
Reminder: in order to use the Insiders Preview SDK, developers must set up their environment on a Windows Insiders OS.
Win32 apps now participate in the windowing model
This update applies the windowing model for Windows 10X to your Win32 apps running in the container. System-defined window placement ensures that users have a consistent and simplified windowing experience that is tailored and appropriate to a smaller, dual-screen, and touch-friendly device. Some gaps remain and will be addressed in future updates.
Additional details can be found in the RelNotes for the release.
A new preview update of Blazor WebAssembly is now available! Here’s what’s new in this release:
Integration with ASP.NET Core static web assets
Token-based authentication
Improved framework caching
Updated linker configuration
Build Progressive Web Apps
Get started
To get started with Blazor WebAssembly 3.2.0 Preview 2 install the latest .NET Core 3.1 SDK.
NOTE: Version 3.1.102 or later of the .NET Core SDK is required to use this Blazor WebAssembly release! Make sure you have the correct .NET Core SDK version by running dotnet --version from a command prompt.
Once you have the appropriate .NET Core SDK installed, run the following command to install the updated Blazor WebAssembly template:
dotnet new -i Microsoft.AspNetCore.Components.WebAssembly.Templates::3.2.0-preview2.20160.5
That’s it! You can find additional docs and samples on https://blazor.net.
Upgrade an existing project
To upgrade an existing Blazor WebAssembly app from 3.2.0 Preview 1 to 3.2.0 Preview 2:
Update your package references and namespaces as described below:
Update all Microsoft.AspNetCore.Components.WebAssembly.* package references to version 3.2.0-preview2.20160.5.
In Program.cs add a call to builder.Services.AddBaseAddressHttpClient().
Rename BlazorLinkOnBuild in your project files to BlazorWebAssemblyEnableLinking.
If your Blazor WebAssembly app is hosted using ASP.NET Core, make the following updates in Startup.cs in your Server project:
Rename UseBlazorDebugging to UseWebAssemblyDebugging.
Remove the call to services.AddResponseCompression (response compression is now handled by the Blazor framework).
Replace the call to app.UseClientSideBlazorFiles<Client.Program>() with app.UseBlazorFrameworkFiles().
Replace the call to endpoints.MapFallbackToClientSideBlazor<Client.Program>("index.html") with endpoints.MapFallbackToFile("index.html").
Hopefully that wasn’t too painful!
Integration with ASP.NET Core static web assets
Blazor WebAssembly apps now integrate seamlessly with how ASP.NET Core handles static web assets. Blazor WebAssembly apps can use the standard ASP.NET Core convention for consuming static web assets from referenced projects and packages: _content/{LIBRARY NAME}/{path}. This allows you to easily pick up static assets from referenced component libraries and JavaScript interop libraries just like you can in a Blazor Server app or any other ASP.NET Core web app.
Blazor WebAssembly apps are also now hosted in ASP.NET Core web apps using the same static web assets infrastructure. After all, a Blazor WebAssembly app is just a bunch of static files!
This integration simplifies the startup code for ASP.NET Core hosted Blazor WebAssembly apps and removes the need to have an assembly reference from the server project to the client project. Only the project reference is needed, so setting ReferenceOutputAssembly to false for the client project reference is now supported.
Building on the static web assets support in ASP.NET Core also enables new scenarios like hosting ASP.NET Core hosted Blazor WebAssembly apps in Docker containers. In Visual Studio you can add docker support to your Blazor WebAssembly app by right-clicking on the Server project and selecting Add > Docker support.
Token-based authentication
Blazor WebAssembly now has built-in support for token-based authentication.
Blazor WebAssembly apps are secured in the same manner as Single Page Applications (SPAs). There are several approaches for authenticating users to SPAs, but the most common and comprehensive approach is to use an implementation based on the OAuth 2.0 protocol, such as OpenID Connect (OIDC). OIDC allows client apps, like a Blazor WebAssembly app, to verify the user identity and obtain basic profile information using a trusted provider.
Using the Blazor WebAssembly project template you can now quickly create apps set up for authentication using:
ASP.NET Core Identity and IdentityServer
An existing OpenID Connect provider
Azure Active Directory
Authenticate using ASP.NET Core Identity and IdentityServer
Authentication for Blazor WebAssembly apps can be handled using ASP.NET Core Identity and IdentityServer. ASP.NET Core Identity handles authenticating users while IdentityServer handles the necessary protocol endpoints.
To create a Blazor WebAssembly app set up with authentication using ASP.NET Core Identity and IdentityServer, run the following command:
dotnet new blazorwasm --hosted --auth Individual -o BlazorAppWithAuth1
If you’re using Visual Studio, you can create the project by selecting the “ASP.NET Core hosted” option and then selecting “Change Authentication” > “Individual user accounts”.
Run the app and try to access the Fetch Data page. You’ll get redirected to the login page.
Register a new user and log in. You can now access the Fetch Data page.
The Server project is configured to use the default ASP.NET Core Identity UI, as well as IdentityServer, and JWT authentication:
// Add the default ASP.NET Core Identity UI
services.AddDefaultIdentity<ApplicationUser>(options => options.SignIn.RequireConfirmedAccount = true)
    .AddEntityFrameworkStores<ApplicationDbContext>();

// Add IdentityServer with support for API authorization
services.AddIdentityServer()
    .AddApiAuthorization<ApplicationUser, ApplicationDbContext>();

// Add JWT authentication
services.AddAuthentication()
    .AddIdentityServerJwt();
The Client app is registered with IdentityServer in the appsettings.json file:
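The generated registration looks roughly like this (the client name follows the project name, and the exact contents may differ slightly between template versions, so treat this as illustrative):

{
  "IdentityServer": {
    "Clients": {
      "BlazorAppWithAuth1.Client": {
        "Profile": "IdentityServerSPA"
      }
    }
  }
}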
In the Client project, the services needed for API authorization are added in Program.cs:
builder.Services.AddApiAuthorization();
In FetchData.razor the IAccessTokenProvider service is used to acquire an access token from the server. The token may be cached or acquired without the need of user interaction. If acquiring the token succeeds, it is then applied to the request for weather forecast data using the standard HTTP Authorization header. If acquiring the token silently fails, the user is redirected to the login page:
protected override async Task OnInitializedAsync()
{
    var httpClient = new HttpClient();
    httpClient.BaseAddress = new Uri(Navigation.BaseUri);

    var tokenResult = await AuthenticationService.RequestAccessToken();

    if (tokenResult.TryGetToken(out var token))
    {
        httpClient.DefaultRequestHeaders.Add("Authorization", $"Bearer {token.Value}");
        forecasts = await httpClient.GetJsonAsync<WeatherForecast[]>("WeatherForecast");
    }
    else
    {
        Navigation.NavigateTo(tokenResult.RedirectUrl);
    }
}
Authenticate using an existing OpenID Connect provider
You can set up authentication for a standalone Blazor WebAssembly app using any valid OIDC provider. Once you’ve registered your app with the OIDC provider, you configure the Blazor WebAssembly app to use that provider by calling AddOidcAuthentication in Program.cs:
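For example, something along these lines; the authority and client ID are placeholders you'd replace with the values from your provider's app registration:

builder.Services.AddOidcAuthentication(options =>
{
    options.ProviderOptions.Authority = "https://accounts.example.com/";
    options.ProviderOptions.ClientId = "your-client-id";
});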
You can also set up Blazor WebAssembly apps to use Azure Active Directory (Azure AD) or Azure Active Directory Business-to-Customer (Azure AD B2C) for authentication. When authenticating using Azure AD or Azure AD B2C, authentication is handled using the new Microsoft.Authentication.WebAssembly.Msal library, which is based on the Microsoft Authentication Library (MSAL.js).
To learn how to set up authentication for a Blazor WebAssembly app using Azure AD or Azure AD B2C, see:
This is just a sampling of the new authentication capabilities in Blazor WebAssembly. To learn more about how Blazor WebAssembly supports authentication see Secure ASP.NET Core Blazor WebAssembly.
Improved framework caching
If you look at the network trace of what’s being downloaded for a Blazor WebAssembly app after it’s initially loaded, you might think that Blazor WebAssembly has been put on some sort of extreme diet:
Whoa! Only 159kB? What’s going on here?
When a Blazor WebAssembly app is initially loaded, the runtime and framework files are now stored in the browser cache storage:
When the app loads, it first uses the contents of the blazor.boot.json to check if it already has all of the runtime and framework files it needs in the cache. If it does, then no additional network requests are necessary.
You can still see what the true size of the app is during development by checking the browser console:
Updated linker configuration
You may notice with this preview release that the download size of the app during development is now a bit larger, but build times are faster. This is because we no longer run the .NET IL linker during development to remove unused code. In previous Blazor previews we ran the linker on every build, which slowed down development. Now we only run the linker for release builds, which are typically done as part of publishing the app. When publishing the app with a release build (dotnet publish -c Release), the linker removes any unnecessary code and the download size is much more reasonable (~2MB for the default template).
If you prefer to still run the .NET IL linker on each build during development, you can turn it on by adding <BlazorWebAssemblyEnableLinking>true</BlazorWebAssemblyEnableLinking> to your project file.
Build Progressive Web Apps with Blazor
A Progressive Web App (PWA) is a web-based app that uses modern browser APIs and capabilities to behave like a native app. These capabilities can include:
Working offline and always loading instantly, independently of network speed
Being able to run in its own app window, not just a browser window
Being launched from the host operating system (OS) start menu, dock, or home screen
Receiving push notifications from a backend server, even while the user is not using the app
Automatically updating in the background
A user might first discover and use the app within their web browser like any other single-page app (SPA), then later progress to installing it in their OS and enabling push notifications.
Blazor WebAssembly is a true standards-based client-side web app platform, so it can use any browser API, including the APIs needed for PWA functionality.
Using the PWA template
When creating a new Blazor WebAssembly app, you’re offered the option to add PWA features. In Visual Studio, the option is given as a checkbox in the project creation dialog:
If you’re creating the project on the command line, you can use the --pwa flag. For example,
dotnet new blazorwasm --pwa -o MyNewProject
In both cases, you’re free to also use the “ASP.NET Core hosted” option if you wish, but don’t have to do so. PWA features are independent of how the app is hosted.
Installation and app manifest
When visiting an app created using the PWA template option, users have the option to install the app into their OS’s start menu, dock, or home screen.
The way this option is presented depends on the user’s browser. For example, when using desktop Chromium-based browsers such as Edge or Chrome, an Add button appears within the address bar:
On iOS, visitors can install the PWA using Safari’s Share button and its Add to Homescreen option. On Chrome for Android, users should tap the Menu button in the upper-right corner, then choose Add to Home screen.
Once installed, the app appears in its own window, without any address bar.
To customize the window’s title, color scheme, icon, or other details, see the file manifest.json in your project’s wwwroot directory. The schema of this file is defined by web standards. For detailed documentation, see https://developer.mozilla.org/en-US/docs/Web/Manifest.
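A minimal manifest looks something like the following; the names, colors, and icon file here are purely illustrative, and the file generated by the template will have its own values:

{
  "name": "My Blazor PWA",
  "short_name": "BlazorPWA",
  "start_url": "./",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#03173d",
  "icons": [
    {
      "src": "icon-512.png",
      "type": "image/png",
      "sizes": "512x512"
    }
  ]
}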
Offline support
By default, apps created using the PWA template option have support for running offline. A user must first visit the app while they are online, then the browser will automatically download and cache all the resources needed to operate offline.
Important: Offline support is only enabled for published apps. It is not enabled during development. This is because it would interfere with the usual development cycle of making changes and testing them.
Warning: If you intend to ship an offline-enabled PWA, there are several important warnings and caveats you need to understand. These are inherent to offline PWAs, and not specific to Blazor. Be sure to read and understand these caveats before making assumptions about how your offline-enabled app will work.
To see how offline support works, first publish your app, and host it on a server supporting HTTPS. When you visit the app, you should be able to open the browser’s dev tools and verify that a Service Worker is registered for your host:
Additionally, if you reload the page, then on the Network tab you should see that all resources needed to load your page are being retrieved from the Service Worker or Memory Cache:
This shows that the browser is not dependent on network access to load your app. To verify this, you can either shut down your web server, or instruct the browser to simulate offline mode:
Now, even without access to your web server, you should be able to reload the page and see that your app still loads and runs. Likewise, even if you simulate a very slow network connection, your page will still load almost immediately since it’s loaded independently of the network.
To learn more about building PWAs with Blazor, check out the documentation.
Known issues
There are a few known issues with this release that you may run into:
When building a Blazor WebAssembly app using an older .NET Core SDK you may see the following build error:
error MSB4018: The "ResolveBlazorRuntimeDependencies" task failed unexpectedly.
error MSB4018: System.IO.FileNotFoundException: Could not load file or assembly 'BlazorApp1\obj\Debug\netstandard2.1\BlazorApp1.dll'. The system cannot find the file specified.
error MSB4018: File name: 'BlazorApp1\obj\Debug\netstandard2.1\BlazorApp1.dll'
error MSB4018: at System.Reflection.AssemblyName.nGetFileInformation(String s)
error MSB4018: at System.Reflection.AssemblyName.GetAssemblyName(String assemblyFile)
error MSB4018: at Microsoft.AspNetCore.Components.WebAssembly.Build.ResolveBlazorRuntimeDependencies.GetAssemblyName(String assemblyPath)
error MSB4018: at Microsoft.AspNetCore.Components.WebAssembly.Build.ResolveBlazorRuntimeDependencies.ResolveRuntimeDependenciesCore(String entryPoint, IEnumerable`1 applicationDependencies, IEnumerable`1 monoBclAssemblies)
error MSB4018: at Microsoft.AspNetCore.Components.WebAssembly.Build.ResolveBlazorRuntimeDependencies.Execute()
error MSB4018: at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()
error MSB4018: at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask)
To address this issue, upgrade to version 3.1.102 or later of the .NET Core 3.1 SDK.
You may see the following warning when building from the command-line:
CSC : warning CS8034: Unable to load Analyzer assembly C:\Users\user\.nuget\packages\microsoft.aspnetcore.components.analyzers\3.1.0\analyzers\dotnet\cs\Microsoft.AspNetCore.Components.Analyzers.dll : Assembly with same name is already loaded
This issue will be fixed in a future update to the .NET Core SDK. To work around this issue, add the <DisableImplicitComponentsAnalyzers>true</DisableImplicitComponentsAnalyzers> property to the project file.
Feedback
We hope you enjoy the new features in this preview release of Blazor WebAssembly! Please let us know what you think by filing issues on GitHub.
Today we are releasing the .NET Core Uninstall Tool for Windows and Mac!
Starting in Visual Studio 2019 version 16.3, Visual Studio manages the versions of the SDK and runtime it installs. In previous versions, SDKs and runtimes were left on upgrade in case those versions were targeted or pinned with global.json. We realized this was not ideal and might have left many unused .NET Core SDKs and runtimes installed on your machine.
Going forward, we’ve updated the Visual Studio behavior. The .NET Core standalone SDK installer also began removing previous patch versions within the same feature band (for example, earlier 3.1.1xx releases) in .NET Core 3.0. If you want a version of the SDK or runtime that was removed during an update, reinstall it from the .NET Core archive. SDKs and runtimes installed with the standalone installers (such as from the .NET archive) are not removed by Visual Studio.
We are releasing the .NET Core Uninstall Tool to help you get your machine into a more manageable state AND save you some disk space!
If you’d like to see what versions of .NET Core SDKs or runtimes are available on your machine, type dotnet --list-sdks or dotnet --list-runtimes, respectively:
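The output looks something like this (the versions and install paths here are just an example from one machine):

C:\> dotnet --list-sdks
2.1.505 [C:\Program Files\dotnet\sdk]
3.0.100 [C:\Program Files\dotnet\sdk]
3.1.102 [C:\Program Files\dotnet\sdk]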
If the list is short, you can uninstall them using Add or Remove Programs. If uninstalling via that dialog appears tedious, you can download and use the .NET Core Uninstall Tool! For specific commands and detailed instructions, see the .NET Core Uninstall Tool article.
This is a powerful tool and it’s easy to make a mistake. But don’t worry… you can always either run a repair on Visual Studio or reinstall from the .NET Core archive.
Because this tool is based on installers, it works only on Windows and Mac and not on Linux.
From an Administrative PowerShell I'll see what OpenSSH stuff I have enabled. I can also do this by typing "Windows Features" from the Start Menu.
> Get-WindowsCapability -Online | ? Name -like 'OpenSSH*'
Name : OpenSSH.Client~~~~0.0.1.0
State : Installed
Name : OpenSSH.Server~~~~0.0.1.0
State : NotPresent
Looks like I have the OpenSSH client stuff but not the server. I can SSH from Windows, but not to.
I'll add it with a similar command with the super weirdo but apparently necessary version thing at the end:
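Here's that command; note the same ~~~~0.0.1.0 version suffix from the listing above:

Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0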
Once this has finished (and you can of course run it with OpenSSH.Client as well to get both sides, if you haven't already), you can start the SSH server (as a Windows Service) with this, then make sure it's running:
Start-Service sshd
Get-Service sshd
Since it's a Windows Service you can see it as "OpenSSH SSH Server" in services.msc, as well as set it to start automatically on startup if you like. You can do that from PowerShell too, if you prefer:
Set-Service -Name sshd -StartupType 'Automatic'
Remember that we SSH over port 22 so you'll have a firewall rule incoming on 22 at this point. It's up to you to be conscious of security. Maybe you only allow SSHing into your Windows machine with public keys (no passwords) or maybe you don't mind. Just be aware, it's on you, not me.
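If you want to see what the OpenSSH install set up, you can inspect (and, if needed, create) the inbound rule from PowerShell. The rule name below is the one the optional feature typically creates, but treat this as a sketch and verify against your own machine:

Get-NetFirewallRule -Name *ssh*

# Only needed if no rule exists - opens TCP 22 inbound
New-NetFirewallRule -Name 'OpenSSH-Server-In-TCP' -DisplayName 'OpenSSH Server (sshd)' `
    -Enabled True -Direction Inbound -Protocol TCP -Action Allow -LocalPort 22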
Now, from any Linux (or Windows) machine I can SSH into my Windows machine like a pro! Note I'm using the .local domain suffix to make sure I don't get a machine on my VPN (staying in my local subnet).
$ ssh scott@ironheart.local
Microsoft Windows [Version 10.0.19041.113]
(c) 2020 Microsoft Corporation. All rights reserved.
scott@IRONHEART C:\Users\scott>pwsh
PowerShell 7.0.0
Copyright (c) Microsoft Corporation. All rights reserved.
https://aka.ms/powershell
Type 'help' to get help.
Loading personal and system profiles took 1385ms.
⚡ scott@IRONHEART>
Note that when I SSH'ed into Windows I got the default cmd.exe shell. Remember also that there's a difference between a console, a terminal, and a shell! I can ssh with any terminal into any machine and end up at any shell. In this case, the DEFAULT was cmd.exe, which is suboptimal.
Configuring the default shell for OpenSSH in Windows
On my server (the Windows machine I'm SSHing into) I will set a registry key to set the default shell. In this case, I'll use open source cross platform PowerShell Core. You can use whatever makes you happy.
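That looks something like this; the DefaultShell value is what the Windows OpenSSH server reads, and the path should point at whichever shell you want (PowerShell 7 shown here as an example):

New-ItemProperty -Path "HKLM:\SOFTWARE\OpenSSH" -Name DefaultShell `
    -Value "C:\Program Files\PowerShell\7\pwsh.exe" -PropertyType String -Force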
Additionally, now that this is set up I can use WinSCP (available in the Windows Store) as well as scp (Secure Copy) to transfer files.
Of course you can also use WinRM or PowerShell Remoting over SSH but for my little internal network I've found this mechanism to be simple and clean. Now my shushing around is non-denominational!
Sponsor: Have you tried developing in Rider yet? This fast and feature-rich cross-platform IDE improves your code for .NET, ASP.NET, .NET Core, Xamarin, and Unity applications on Windows, Mac, and Linux.
We’re excited to share that Forrester has named Microsoft as a leader in the inaugural report, The Forrester New Wave™: Function-As-A-Service Platforms, Q1 2020, based on their evaluation of Azure Functions and integrated development tooling. We believe Forrester’s findings reflect the strong momentum of event-driven applications in Azure and our vision, crediting Azure Functions with a “robust programming model and integration capabilities,” and also confirm Microsoft’s commitment to be the best technology partner for you, as customers call out the responsiveness of Microsoft Azure’s “engineering and support teams as key to their success.”
Best-in-class development experience
Azure Functions is an event-driven serverless compute platform with a programming model based on triggers and bindings for accelerated and simplified applications development. Fully integrated with other Azure services and development tools, its end-to-end development experience allows you to build and debug your functions locally on any major platform (Windows, macOS, and Linux), as well as deploy and monitor them in the cloud. You can even deploy the exact same functions code to other environments, such as your own infrastructure or your Kubernetes cluster, enabling seamless hybrid deployments.
In their report, Forrester noted that the Azure Functions programming model “supports a multitude of programming languages with extensive integration options, … and bindings for Azure Event Hub, and Azure Event Grid helps developers build event-driven microservices.”
Enterprise-grade FaaS platform
Enterprise customers like Chipotle love the velocity and productivity that event-driven architectures bring to developing applications. We are committed to building great experiences that enable the modernization of those enterprise workloads, and the Forrester report states that “strategic adopters of Azure will find that Azure Functions helps integrate Microsoft’s fast-expanding array of cloud services”, making that transformation journey easier. Some of our latest innovations are focused on the needs of enterprise customers, such as the Premium plan to host functions without cold-start for low latency workloads or PowerShell support enabling serverless automation scenarios for cloud and hybrid deployments.
In their report, Forrester also recognized Azure Functions as “a good fit for companies that need stateful functions” thanks to Durable Functions, an extension to the Azure Functions runtime that brings stateful and orchestration capabilities to serverless functions. Durable Functions stands alone in the serverless space, providing stateful functions and a way to define serverless workflows programmatically. Forrester mentioned specifically in the report that “clients modernizing enterprise apps will find that Durable Functions offers an alternative to refactoring existing business logic into bite-size stateless chunks."
ML.NET is an open source and cross-platform machine learning framework made for .NET developers.
Using ML.NET, you can stay in .NET to easily build and consume custom machine learning models for scenarios like sentiment analysis, price prediction, sales forecasting, recommendation, image classification, and more.
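As a tiny sketch of what staying in .NET looks like, here's a minimal sentiment-analysis example built on the MLContext API; the class names and the two-row training set are purely illustrative:

using Microsoft.ML;
using Microsoft.ML.Data;

public class SentimentInput
{
    public string Text { get; set; }
    public bool Label { get; set; }
}

public class SentimentPrediction
{
    [ColumnName("PredictedLabel")]
    public bool Prediction { get; set; }
}

class Program
{
    static void Main()
    {
        var mlContext = new MLContext();

        // A tiny in-memory training set, just to show the shape of the API
        var samples = new[]
        {
            new SentimentInput { Text = "I love this podcast", Label = true },
            new SentimentInput { Text = "This was a waste of time", Label = false }
        };
        IDataView trainingData = mlContext.Data.LoadFromEnumerable(samples);

        // Featurize the text and train a binary classifier
        var pipeline = mlContext.Transforms.Text.FeaturizeText("Features", nameof(SentimentInput.Text))
            .Append(mlContext.BinaryClassification.Trainers.SdcaLogisticRegression());
        ITransformer model = pipeline.Fit(trainingData);

        // Consume the model
        var engine = mlContext.Model.CreatePredictionEngine<SentimentInput, SentimentPrediction>(model);
        var result = engine.Predict(new SentimentInput { Text = "Fantastic episode!" });
        System.Console.WriteLine($"Positive? {result.Prediction}");
    }
}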
Over the past six months, the team has been working hard on fixing bugs, improving documentation, and adding more features and capabilities based on user feedback. This includes:
Enhancements for .NET Core 3.0
Azure training for image classification in Model Builder
Expanded support for ONNX export
Database loader for model training directly against relational databases
Simplified Image Classification API for training image classification models
Support for ML.NET in Jupyter Notebooks
Now we’d like to see how you’re using ML.NET and what features we can add and/or improve to make the framework and tooling even better.
Through the survey below, we would love to get feedback on how we can improve ML.NET. We will use your feedback to drive the direction of ML.NET and update our roadmap.
More than ever, web developers are recognizing that the web must be accessible and inclusive in order to create great experiences for everyone. Nearly half of computer users in the US also use some form of assistive technology (AT). Users of AT may have physical or cognitive disabilities, temporary injuries, hearing or vision loss, or other conditions that necessitate different experiences on the web. Other users may not have an impairment but benefit from the convenience of features such as keyboard navigation.
Driven by this knowledge and Microsoft’s commitment to inclusivity and accessibility, our team of developers, designers, and accessibility experts worked together to bring new accessibility features and improvements to the Microsoft Edge Developer Tools.
Engineers from the Microsoft Edge DevTools team
Developers who use DevTools via the keyboard or with screen readers like NVDA and Narrator should find great improvements in navigating between tabs and viewing detailed information within panes. Our accessibility improvements, informed by W3C’s Web Content Accessibility Guidelines (WCAG), extend beyond essential tab and pane navigation. Complex features like breakpoints and performance details are now accessible, too.
Navigating the Performance tool with NVDA
In some cases, tools were even reimagined or built from scratch. For example, the new Initiator tab makes stack traces accessible by moving them out of a hover element and into their own tab. The stack traces are now in a format that is more compatible with AT.
The request initiator chain in the Initiator tab
We’ve also improved color contrast ratios in the UI of DevTools and ensured that data charts and other information can be visualized in ways other than by color alone.
These DevTools accessibility features and more are all available in the new Microsoft Edge browser for Windows 10, Mac OS X, and legacy Windows (7/8/8.1). You’ll also find them in Chrome and other Chromium-based browsers! With the support of the Google Chrome team and the Chromium community, we’ve committed over 150 changes back into Chromium on DevTools accessibility features alone. We’re proud to share this accessibility work to help improve the experience for millions of developers on Microsoft Edge and other Chromium-based browsers.
And our work isn’t done—we continue to work toward additional features such as:
Support for high-contrast mode for DevTools,
Tooling to simulate high-contrast on websites being debugged in DevTools, allowing developers on both Windows and Mac platforms to test high-contrast layouts, and
Making sure our DevTools meet the accessibility recommendations outlined in the new WCAG 2.1 standards.
The stable version of Microsoft Edge featuring these accessibility improvements is available for download. Give it a try and let us know what you think!
Read what our innovative Microsoft for Education customers have done to keep students engaged while they transition to remote learning in these challenging times.
The Bing Maps Platform offers a full suite of premium routing and logistics APIs for your fleet management and logistics applications, and we are pleased to share with you some of our recent updates to these APIs.
Calculate Route API and Truck Routing API: Avoid border crossing support added
There are a number of scenarios where users may want to be mindful of country borders when calculating a route. For example, when driving from "Mussey Township, MI" to "850 Lake Ave, Rochester, NY", the fastest route is entering Ontario, Canada at Blue Water Bridge, and then coming back to New York, United States, as shown in the bing.com/maps screenshot below:
However, some drivers and vehicles may need to keep their routes within the United States due to visa requirements or customs duties. Using the optional parameter avoid=borderCrossing, Bing Maps Calculate Route API can offer an alternate route, which may have a longer travel time and travel distance, but keeps the route within the United States without crossing country borders. See screenshot below:
In addition to avoid=borderCrossing for avoiding crossing country borders for auto and truck routes, Bing Maps now also supports customizable border crossing restrictions for the Truck Routing API. This optional borderRestriction parameter supports specifying regions or geographic areas where border crossing should be avoided, allowed or minimized. For example, some transport rules may allow trucks to route through some countries, however, strictly prohibit the trucks from entering some other countries. Also, there may be state or federal regulations around transporting certain goods, such as tobacco, vaccines, medicines or controlled substances across a country border or state lines.
The Bing Maps Truck Routing API can now handle these scenarios by simply setting a parameter in the API call. Here's an example scenario: a truck driver may need to drive from Ashville, TN to Jacksonville, FL, with a requirement to avoid driving through the state of Georgia. The borderRestriction parameter is used to accomplish this, with Georgia (US-GA) specified as the region to avoid.
The black dashed line on the maps illustrates the route without the borderRestriction, which will go through the state of Georgia, while the blue line shows the route as the user has specified, with the state boundary restriction for Georgia (US-GA). See the Calculate a Route API and Calculate a Truck Route API documentation for more details on these features.
Isochrone API: Higher accuracy, improved performance and truck route support
The Bing Maps Team has made several improvements to the Bing Maps Isochrone API. The API now returns more points, which results in smoother and more precise isochrone polygons. Also, with the latest performance improvements, larger isochrone requests return results faster than before.
Also, the Isochrone API now supports "truck" mode. The example below shows a comparison between an auto isochrone and a truck isochrone with the same start point and travel time. As you can see, the truck isochrone polygon covers a smaller area on the map because of the lower speed limits based on vehicle attributes.
For Bing Maps Snap to Road API, vehicle attributes can be defined in the POST body for trucks, and the API can return different speed limits based on vehicle specifications defined by the user. There are cases where road segments have different speed limits for auto and trucks in general even without defined vehicle specs.
In the example below, the red line visualizes the interpolated route between the same input points as connected by the blue line, with speed limits for auto and trucks for the roads (i.e., in the speed limit pairs, the first speed is for auto, and the second speed is for trucks). For the first half of the road segment on I-90 (from west to east) as shown below, the auto and trucks have the same speed limit of 97 kmph, while for the second half of the road segment, the auto speed limit is 113 kmph and the truck speed limit is 97 kmph.
Some road segments have different truck speeds with respect to vehicle specs. For the road segment shown below, in the example on the left where no vehicle specs are defined for the Truck Routing API, the API returns speed limits for auto and those for an undefined truck, which happen to be the same for this road segment. In the example on the right, if the user defines the vehicle specs, (e.g., {"VehicleWeight":[8300]}), the API returns 113 kmph for auto and 96 kmph for trucks with weight of 8300 kg. For more details about truck attributes, please check the Calculate a Truck Route API documentation.
Multi-Itinerary Optimization API: Increased number of agents and locations
After the launch of the Bing Maps Multi-Itinerary Optimization API at Microsoft Ignite 2019, the Bing Maps Team has been busy listening to customer feedback. On that note, we're happy to announce that the Multi-Itinerary Optimization API now supports up to 20 agents (previously 10) and 300 locations (previously 100). We’ll continue to make improvements to the optimization algorithms to support even larger requests and to ensure accurate travel time estimates in the itineraries.
We hope the new features and enhancements within these Bing Maps Routing and Logistics APIs continue to empower you to create innovative solutions for your customers and users.
From speaking to desktop developers, we’ve heard that you want to learn how to quickly set up continuous integration and continuous deployment (CI/CD) workflows for your WPF and Windows Forms applications in order to take advantage of the many benefits CI/CD pipelines have to offer, such as:
Catch bugs early in the development cycle
Improve software quality and reliability
Ensure consistent quality of builds
Deploy new features quickly and safely, improving release cadence
Fix issues quickly in production by rolling forward new deployments
With GitHub Actions, you can quickly and easily automate your software workflows with CI/CD.
Integrate code changes directly into GitHub to speed up development cycles
Trigger builds to quickly identify build breaks and create testable debug builds
Continuously run tests to identify and eliminate bugs
Automatically build, sign, package and deploy branches that pass CI
The sample application demonstrates how to author the YAML files that comprise the DevOps workflow in GitHub. In the step-by-step walkthrough you’ll learn:
How to author YAML files to take advantage of multiple channels, so that you can build different versions of your application for test, sideload deployment and the Microsoft Store (a minimal sketch of such a workflow follows this list).
Best-practices for securely storing passwords and other secrets in GitHub, ensuring you protect your valuable assets.
How to enable Publish Profiles in your WPF and Windows Forms applications, files that store information about your publish targets such as the deployment location, target framework, and target runtime. Publish Profiles are referenced by the Windows Application Packaging project and simplify the build and packaging steps of your DevOps pipeline making the authoring process much easier.
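As a rough sketch of the overall shape of one of these workflows (the solution name and branch are placeholders, and the real sample in the walkthrough does considerably more, including packaging and signing through the Windows Application Packaging project):

name: WPF CI
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up MSBuild
        uses: microsoft/setup-msbuild@v1
      - name: Restore and build
        run: msbuild MyWpfApp.sln /t:Restore,Build /p:Configuration=Release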
With many employees suddenly working from home, there are things an organization and employees can do to help remain productive without increasing risk.
Buckle up kids, as this is a tale. As you may know, I have a lovely podcast at https://hanselminutes.com. You should listen.
Recently, through a number of super cool random events, I got the opportunity to interview actor Chris Conner, who plays Poe on Altered Carbon. I'm a big fan of the show but especially Chris. You should watch the show because Poe is a joy and Chris owns every scene, and that's with a VERY strong cast.
I usually do my interviews remotely for the podcast but I wanted to meet Chris and hang out in person so I used my local podcasting rig which consists of a Zoom H6 recorder.
I have two Shure XLR mics, a mic stand, and the Zoom. The Zoom H6 is a very well-thought-of workhorse and I've used it many times before when recording shows. It's not rocket surgery but one should always test their things.
I didn't want to take any chances, so I picked up a 5 pack of 32GB high-quality SD Cards. I put a new one in the Zoom, the Zoom immediately recognized the SD Card, so I did a local recording right there and played it back. Sounds good. I played it back locally on the Zoom and I could hear the recording from the Zoom's local speaker. It's recording the file in stereo, one side for each mic. Remember this for later.
I went early to the meet and set up the whole recording setup. I hooked up a local monitor and tested again. Records and plays back locally. Cool. Chris shows up, we recorded a fantastic show, he's engaged and we're now besties and we go to Chipotle, talk shop, Sci-fi, acting, AIs, etc. Just a killer afternoon all around.
I head home and pull out the SD Card and put it into the PC and I see this. I almost vomit. I get lightheaded.
I've been recording the show for over 730 episodes over 14 years and I've never lost a show. I do my homework - as should you. I'm reeling. Ok, breathe. Let's work the problem.
Right click the drive, check properties. Breathe. This is a 32 gig drive, but Windows sees that it's got 329 MB used. 300ish megs is the size of a 30 minute long two channel WAV file. I know this because I've looked at 300 meg files for the last several hundred shows. Just like you might know roughly the size of a JPEG your camera makes. It's a thing you know.
Command line time. List the root directory. Empty. Check it again with "show all files." Weird, there's a Mac folder there, but maybe the SD Card was preformatted on a Mac.
Interesting Plot Point - I didn't format the SD card. I use it as it came out of the packaging from Amazon. It came preformatted and I accepted it. I tested it and it worked but I didn't "install my own carpet." I moved in to the house as-is.
What about a little "show me all folders from here down" action? Same as I saw in Windows Explorer. The root folder has another subfolder which is itself. It's folder "Inception" with no Kick!
G:\>dir /a
Volume in drive G has no label.
Volume Serial Number is 0403-0201
Directory of G:\
03/12/2020 12:29 PM <DIR>
03/13/2020 12:44 PM <DIR> System Volume Information
0 File(s) 0 bytes
2 Dir(s) 30,954,225,664 bytes free
G:\>dir /s
Volume in drive G has no label.
Volume Serial Number is 0403-0201
Directory of G:\
03/12/2020 12:29 PM <DIR>
0 File(s) 0 bytes
Directory of G:\
03/12/2020 12:29 PM <DIR>
0 File(s) 0 bytes
IT GOES FOREVER
Ok, the drive thinks there's data but I can't see it. I put the SD card back in the Zoom and try to play it back.
The Zoom can see folders and files AND the interview itself. And the Zoom can play it back. The Zoom is an embedded device with an implementation of the FAT32 file system and it can read it, but Windows can't. Can Linux? Can a Mac?
Short answer. No.
Hacky Note: Since the Zoom can see and play the file and it has a headphone/monitor jack, I could always plug an analog 1/8" headphone cable into a 1/4" input on my Peavey PV6 mixer and rescue the audio with some analog quality loss. Why don't I use the USB Audio out feature of the Zoom H6 and play the file back over a digital cable, you ask? Because the Zoom's audio player doesn't support that. It supports three modes - SD Card Reader (which is a pass-through to Windows and shows me the recursive directories and no files), Audio pass-through (which makes the Zoom look like an audio device to Windows but doesn't expose the SD card as a drive or let it be played back over the digital interface), or its main mode where it's recording locally.
It's Forensics Time, Kids.
We have a 32 gig SD Card - a disk drive as it were - that is standard FAT32 formatted, that has 300-400 megs of a two-channel (Chris and I had two mics) WAV file that was recorded locally by the Zoom H6 audio recorder, and I don't want to lose it or mess it up.
I need to take a byte for byte image of what's on the SD Card so I can poke at it and "virtually" mess with it, change it, fix it, try again, without touching the physical card.
"dd" is a command-line utility with a rich and storied history going back 45 years. Even though it means "Data Definition" it'll always be "disk drive" I my head.
How to clone a USB Drive or SD Card to an IMG file on Windows
I have a copy of dd for Windows which lets me get a byte for byte stream/file that represents this SD Card. For example I could image an entire USB device:
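Something roughly like this, using dd for Windows. The Harddisk8 here matches the 29 GB SD card that diskpart shows below as Disk 8 (your disk number will differ), and Partition0 means the whole disk:

dd if=\\?\Device\Harddisk8\Partition0 of=ZOMG.img bs=1M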
I need to know the Harddisk number and Partition number as you can see above. I usually use diskpart for this.
>diskpart
Microsoft DiskPart version 10.0.19041.1
Copyright (C) Microsoft Corporation.
On computer: IRONHEART
DISKPART> list disk
Disk ### Status Size Free Dyn Gpt
-------- ------------- ------- ------- --- ---
Disk 0 Online 476 GB 0 B *
Disk 1 Online 1863 GB 0 B *
Disk 2 Online 3725 GB 0 B
Disk 3 Online 2794 GB 0 B *
Disk 8 Online 29 GB 3072 KB
IF and OF are input file and output file, and I will do it for the whole size of the SD Card. It's likely overkill though as we'll see in a second.
This file ended up being totally massive and hard to work with. Remember, I needed just the first 400ish megs? I'll chop off just that part.
dd if=ZOMG.img of=SmallerZOMG.img bs=1M count=400
What is this though? Remember it's an image of a File System. It's just bytes in a file. It's not a WAV file or a THIS file or a THAT file. I mean, it is if we decide it is, but in fact, a way to think about it is that it's a mangled envelope that is dark when I peer inside it. We're gonna have to feel around and see if we can rebuild a sense of what the contents really are.
Importing Raw Bytes from an IMG into Audition or Audacity
Both Adobe Audition and Audacity are audio apps that have an "Import RAW Data" feature. However, I DO need to tell Audition how to interpret it. There are lots of kinds of WAV files out there. How many samples per second? 1 channel or 2 channel? 16 bit or 32 bit? Lots of questions.
Can I just import this 4 gig byte array of a file system and get something?
Looks like something. You can see that the first part there is likely the start of the partition table, file system headers, etc. before audio data shows up. Here's importing as 2 channel.
I can hear voices but they sound like chipmunks and aren't understandable. Something is "doubled." Sample rate? No, I double checked it.
Here's 1 channel raw data import even though I think it's two.
Now THIS is interesting. I can hear audio at normal speed of us talking (after the preamble) BUT it's only a syllable at a time, and then a quieter version of the same syllable repeats. I don't want to (read: can't really) reassemble a 30 min interview from syllables, right?
Remember when I said that the Zoom H6 records a two channel file with one channel per mic? Not really. It records ONE FILE PER CHANNEL. A whateverL.wav and a whateverR.wav. I totally forgot!
This "one channel" file above is actually the bytes as they were laid down on disk, right? It's actually two files written simultaneously, a few kilobytes at a time, L,R,L,R,L,R. And here I am telling my sound software to treat this "byte for byte file system dump" as one file. It's two that were made at the same time.
It's like the Brundlefly. How do I tease it apart? Well, I can't treat the array as a raw file anymore, it's not. And I'd want to (read: really don't have the energy yet to) write my own little app to effectively de-interlace this image. I also don't know if the segment size is perfectly reliable or if it varies as the Zoom recorded.
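If I do find the energy, a minimal sketch of that de-interlacer could look something like this bash loop. Big assumptions that I have NOT verified: a constant chunk size (64 KB is a pure guess) and nothing but the two tracks interleaved in this stretch of the image:

#!/bin/bash
# Hypothetical de-interlacer: split alternating fixed-size chunks into two raw files.
IMG=SmallerZOMG.img
CHUNK=65536                                  # guessed chunk size in bytes
CHUNKS=$(( $(stat -c %s "$IMG") / CHUNK ))   # number of whole chunks in the image
rm -f left.raw right.raw
for (( i=0; i*2+1 < CHUNKS; i++ )); do
  dd if="$IMG" bs=$CHUNK skip=$(( i*2 ))     count=1 status=none >> left.raw
  dd if="$IMG" bs=$CHUNK skip=$(( i*2 + 1 )) count=1 status=none >> right.raw
done
# left.raw and right.raw could then be pulled into Audition/Audacity as raw audio
# to hear whether either half sounds sane.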
NOTE: Pete Brown has written about RIFF/WAV files from Sound Devices recorders having an incorrect FAT32 bit set. This isn't that, but it's in the same family and is worth noting if you ever have an issue with a Broadcast Wave File getting corrupted or looking encrypted.
While helping me work this issue, Pete Brown tweeted a hexdump of the Directory Table so you can see the ZOOM0001, ZOOM0002, etc. directories there in the image.
Let me move into Ubuntu on my Windows machine running WSL. Here I can run fdisk and get some sense of what this image of the bad SD Card is. Remember also that I kept only the first 400ish megs, but this IMG file thinks it's a 32gig drive, because it is. It's just been aggressively truncated.
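Something like this should do it; fdisk from util-linux is happy to read a plain image file, no loop device needed:

$ fdisk -l SmallerZOMG.img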
Device Boot Start End Sectors Size Id Type
SmallerZOMG.img1 8192 61157375 61149184 29.2G c W95 FAT32 (LBA)
Maybe I can "mount" this IMG? I make a folder on Ubuntu/WSL2 called ~/recovery. Yikes, ok there's nothing there. I can take the sector size 512 times the Start block of 8192 and use that as the offset.
sudo mount -o loop,offset=4194304 SmallerZOMG.img recover/
$ cd recover/
$ ll
total 68
drwxr-xr-x 4 root root 32768 Dec 31 1969 ./
Ali Mosajjal thinks perhaps "they re-wrote the FAT32 structure definition and didn't use a standard library and made a mistake," and Leandro Pereira postulates that "what could happen is that the LFN (long file name) checksum is invalid and they didn't bother filling in the 8.3 filename... so when complying implementations of VFAT try to look at the fallback 8.3 name, it's all spaces, and they figure 'it's all padding, move along.'"
Ali suggested running dosfsck on the mounted image and you can see again that the files are there, but there are like 3 root entries? Note I've done a cat of /proc/mounts to see which loop device my img is mounted on so I can refer to it in the dosfsck command.
$ sudo dosfsck -w -r -l -a -v -t /dev/loop3
fsck.fat 4.1 (2017-01-24)
Checking we can access the last sector of the filesystem
Boot sector contents:
System ID " "
Media byte 0xf8 (hard disk)
512 bytes per logical sector
32768 bytes per cluster
1458 reserved sectors
First FAT starts at byte 746496 (sector 1458)
2 FATs, 32 bit entries
3821056 bytes per FAT (= 7463 sectors)
Root directory start at cluster 2 (arbitrary size)
Data area starts at byte 8388608 (sector 16384)
955200 data clusters (31299993600 bytes)
63 sectors/track, 255 heads
8192 hidden sectors
61149184 sectors total
Checking file /
Checking file /
Checking file /
Checking file /System Volume Information (SYSTEM~1)
Checking file /.
Checking file /..
Checking file /ZOOM0001
Checking file /ZOOM0002
Checking file /ZOOM0003
Checking file /ZOOM0001/.
Checking file /ZOOM0001/..
Checking file /ZOOM0001/ZOOM0001.hprj (ZOOM00~1.HPR)
Checking file /ZOOM0001/ZOOM0001_LR.WAV (ZOOM00~1.WAV)
Checking file /ZOOM0002/.
Checking file /ZOOM0002/..
Checking file /ZOOM0002/ZOOM0002.hprj (ZOOM00~1.HPR)
Checking file /ZOOM0002/ZOOM0002_Tr1.WAV (ZOOM00~1.WAV)
Checking file /ZOOM0002/ZOOM0002_Tr2.WAV (ZOOM00~2.WAV)
Checking file /ZOOM0003/.
Checking file /ZOOM0003/..
Checking file /ZOOM0003/ZOOM0003.hprj (ZOOM00~1.HPR)
Checking file /ZOOM0003/ZOOM0003_Tr1.WAV (ZOOM00~1.WAV)
Checking file /ZOOM0003/ZOOM0003_Tr2.WAV (ZOOM00~2.WAV)
Checking file /System Volume Information/.
Checking file /System Volume Information/..
Checking file /System Volume Information/WPSettings.dat (WPSETT~1.DAT)
Checking file /System Volume Information/ClientRecoveryPasswordRotation (CLIENT~1)
Checking file /System Volume Information/IndexerVolumeGuid (INDEXE~1)
Checking file /System Volume Information/AadRecoveryPasswordDelete (AADREC~1)
Checking file /System Volume Information/ClientRecoveryPasswordRotation/.
Checking file /System Volume Information/ClientRecoveryPasswordRotation/..
Checking file /System Volume Information/AadRecoveryPasswordDelete/.
Checking file /System Volume Information/AadRecoveryPasswordDelete/..
Checking for bad clusters.
We can see them, but can't get at them with the vfat file system driver on Linux or with Windows.
The DUMP.exe util as part of mtools for Windows is amazing but I'm unable to figure out what is wrong in the FAT32 file table. I can run minfo on the Linux command line, telling it to skip 8192 sectors in with the @@offset modifier:
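The invocation probably looks something like this, mirroring the @@offset syntax the mdir command uses further down:

$ minfo -i ZOMG.img@@8192S ::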
bootsector information
======================
banner:" "
sector size: 512 bytes
cluster size: 64 sectors
reserved (boot) sectors: 1458
fats: 2
max available root directory slots: 0
small size: 0 sectors
media descriptor byte: 0xf8
sectors per fat: 0
sectors per track: 63
heads: 255
hidden sectors: 8192
big size: 61149184 sectors
physical drive id: 0x80
reserved=0x0
dos4=0x29
serial number: 04030201
disk label=" "
disk type="FAT32 "
Big fatlen=7463
Extended flags=0x0000
FS version=0x0000
rootCluster=2
infoSector location=1
backup boot sector=6
Infosector:
signature=0x41615252
free clusters=944648
last allocated cluster=10551
Ok, now we've found yet ANOTHER way to get at this corrupted file system. With mtools we'll use mdir to list the root directory. Note there is something wrong enough that I have to add mtools_skip_check=1 to ~/.mtoolsrc to continue.
$ mdir -i ZOMG.img@@8192S ::
Total number of sectors (61149184) not a multiple of sectors per track (63)!
Add mtools_skip_check=1 to your .mtoolsrc file to skip this test
$ pico ~/.mtoolsrc
$ mdir -i ZOMG.img@@8192S ::
Volume in drive : is
Volume Serial Number is 0403-0201
Directory for ::/
I can see I seek'ed to the right spot, as the string FAT32 is just hanging out. Maybe I can clip out this table and visualize it in a better graphical tool.
I could grab a reasonable (read: arbitrary) chunk from this offset and put it in a very small manageable file:
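For example, a dd along these lines would carve out the first 16 megs starting at the partition offset; the output name and the count are arbitrary, just enough to cover the boot sector, FSInfo, both FATs, and the start of the directory tables:

dd if=ZOMG.img of=fatchunk.img bs=512 skip=8192 count=32768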
And then load it in dump.exe on Windows which is really a heck of a tool. It seems to think there are multiple FAT Root Entries (which might be why I'm seeing this weird ghost root). Note the "should be" parts as well.
The most confusing part is that the FAT32 FSInfo signature - the magic number - is always supposed to be 0x41615252. Google that. You'll see. It's a hardcoded signature but maybe I've got the wrong offset and at that point all bets are off.
So do I have that? I can search a binary file for Hex values with a combo of xxd and grep. Note the byte swap:
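Something like this does the trick, remembering that 0x41615252 lands on disk little-endian, so the raw bytes read 52 52 61 41 and xxd's two-byte grouping shows them as 5252 6141:

$ xxd SmallerZOMG.img | grep '5252 6141'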
Just before this is 55 AA, the two-byte signature that marks the end of a boot sector (the same two bytes that sit right after the 64 byte partition table in the MBR).
Now do I have two FAT32 info blocks and three Root Entries? I'm lost. I'll update this part as I learn more.
7zip all the things
Here's where it gets weird and it got so weird that both Pete Brown and I were like, WELL. THAT'S AMAZING.
On a whim I right-clicked the IMG file and opened it in 7zip and saw this.
See that directory there that's a nothing? A space? A something. It has no Short Name. It's an invalid entry but 7zip is cool with it. Let's go in. Watch the path and the \. That's a path separator, nothing, and another path separator. That's not allowed or OK but again, 7zip is chill.
I dragged the files out and they're fine! The day is saved.
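For what it's worth, the 7z command line can do a version of the same trick. Something like this should list what 7-Zip sees in the image and then extract it; it may take two passes, since 7-Zip opens the MBR first and the FAT volume sits inside it, and "recovered" is just my folder name:

7z l ZOMG.img
7z x ZOMG.img -orecovered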
The moral? There are a few I can see.
Re-format the random SD cards you get from Amazon, specifically on the device you're gonna use them in.
FAT as a spec has a bunch of stuff that different "drivers" (Windows, VFAT, etc) may ignore or elide over or just not implement.
I've got 85% of the knowledge I need to spelunk something like this but that last 15% is a brick wall. I would need more patience and to read more about this.
Knowing how to do this is useful for any engineer. It's the equivalent of knowing how to drive a stick shift in an emergency even if you usually use Lyft.
I'm clearly not an expert but I do have a mental model that includes (but is not limited to) bytes on the physical media, the file system itself, file tables, directory tables, partition tables, and how they kinda work on Linux and Windows.
I clearly hit a wall as I know what I want to do but I'm not sure of the next step.
There's a bad Directory Table Entry. I want to rename it and make sure it's complete and to spec.
7zip is amazing. Try it first for basically everything.
Ideally I'd be able to update this post with exactly what byte is wrong and how to fix it. Thanks to Ali, Pete, and Leandro for playing with me!
Your thoughts? (If you made it this far, the truncated IMG of the 32 gig SD is here (500 megs) but you might have to pad it out with zeros to make some tools like it.)
Sponsor: Have you tried developing in Rider yet? This fast and feature-rich cross-platform IDE improves your code for .NET, ASP.NET, .NET Core, Xamarin, and Unity applications on Windows, Mac, and Linux.