
AzureRMR 2.3.0 now on CRAN


This post is to announce that the latest update to AzureRMR is now available on CRAN. Version 2.3.0 brings several changes to make life easier when managing resources in Azure.

New in this version is a facility for parallelising connections to Azure, using a pool of background processes. Some operations, such as downloading many small files or interacting with a cluster of VMs, can be sped up significantly by carrying them out in parallel rather than sequentially. The code for this is currently duplicated in multiple packages including AzureStor and AzureVM; moving it into AzureRMR removes the duplication and also makes it available to other packages that may benefit. See the vignette for more details.
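As a rough sketch of what this looks like in practice (init_pool and delete_pool are the exported pool functions; pool_lapply is one of the apply-style wrappers covered in the vignette, and the slow task here is just a stand-in):

```r
library(AzureRMR)

# start a pool of background R processes
init_pool(5)

# run a slow per-item task in parallel across the pool
res <- pool_lapply(1:10, function(i) {
    Sys.sleep(1)   # stand-in for a network operation, e.g. a small file download
    i^2
})

# shut down the background processes when finished
delete_pool()
```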

One side-effect of this change is that loading a current version of AzureStor or AzureVM, along with AzureRMR, will bring up a message in the R console:

> library(AzureRMR)
> library(AzureVM)
Attaching package: 'AzureVM'
The following objects are masked from 'package:AzureRMR':
    delete_pool, init_pool

Similarly, if you load the SAR package, you will receive a warning:

> library(SAR)
Warning messages:
1: replacing previous import 'AzureRMR::init_pool' by 'AzureStor::init_pool' when loading 'SAR'
2: replacing previous import 'AzureRMR::delete_pool' by 'AzureStor::delete_pool' when loading 'SAR'

These messages are because the pool functions in AzureRMR have the same names as those in the other packages. You can safely ignore them; everything will still function correctly, and I'll be submitting updated versions to CRAN in the next few days (as soon as the AzureRMR update propagates to CRAN mirrors).

Other changes in 2.3.0 include:

  • Subscription and resource group objects now have do_operation methods, like resource objects. This allows you to carry out arbitrary operations on a subscription or resource group, if you know the REST call.
  • AzureGraph is now a direct import, which should help ensure your credentials are consistent for Resource Manager and Microsoft Graph.
  • Error messages should now be much more informative, especially when deploying templates.
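For example, a hypothetical sketch of calling an arbitrary operation on a resource group (the tenant, subscription and resource group names here are placeholders; do_operation sends a GET request by default):

```r
library(AzureRMR)

# substitute your own tenant/subscription/resource group names
rg <- get_azure_login("mytenant")$
    get_subscription("subscription_id")$
    get_resource_group("rgname")

# carry out an arbitrary REST operation on the resource group,
# in this case listing the resources it contains
rg$do_operation("resources")
```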

If you run into problems, or to send feedback, please open an issue at the GitHub repo.


An Update on C++/CLI and .NET Core


The first public preview of our C++/CLI support for .NET Core 3.1 is now available! It is included in Visual Studio 2019 update 16.4 Preview 2. We would love it if you could try it out and send us any feedback you have. For more information about what this is and the roadmap going forward, check out my last post on the future of C++/CLI and .NET Core.

To get started, make sure you have all the necessary components installed. C++/CLI support for desktop development is an optional component, so you will need to select it in the installer’s right pane:

Install the “Desktop development with C++” workload and be sure to include the optional “C++/CLI support” component.

You will also need the .NET Core cross-platform development workload. It installs everything you need including the .NET Core 3.1 SDK:

Install the “.NET Core cross-platform development” workload.

Creating a C++/CLI .NET Core Project

First, you will want to create a “CLR Class Library (.NET Core)” or “CLR Empty Project (.NET Core)”. The class library template includes some additional boilerplate that sets up an example class and a precompiled header, which may make it easier to get started. The empty project is ideal for bringing in existing C++/CLI code. Retargeting existing C++/CLI projects to .NET Core isn’t recommended.
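For reference, the kind of code such a project contains looks roughly like this (a sketch, not the exact template contents; C++/CLI source is compiled with the /clr option and the class name here is invented):

```
// Greeter.h -- a minimal C++/CLI managed class exposed to .NET Core callers
#pragma once

public ref class Greeter
{
public:
    // managed method, callable from C# like any other .NET class library
    System::String^ Greet(System::String^ name)
    {
        return System::String::Format("Hello, {0}!", name);
    }
};
```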

There isn’t currently a template for C++/CLI console or Windows applications that can be used with .NET Core. Instead, you must put the application entry point outside of the C++/CLI code. In general, we strongly recommend keeping C++/CLI projects as narrow in scope as possible, handling just the interoperability between .NET Core and other C++ code.

Once you create one of these projects, you can reference it from other .NET Core projects like any other class library – with one important caveat. .NET Core projects are typically architecture agnostic. You see this as the architecture “Any CPU” in the Configuration Manager and “MSIL” in the build logs. This is the default for all .NET Core projects. If you reference any C++/CLI class libraries, you must specify an explicit architecture for the non-C++ projects instead of “Any CPU”.

You can set a project’s architecture by using the Configuration Manager.

If the architectures don’t match, you will see this warning and attempting to load the C++/CLI class library will fail at runtime:

“Warning MSB3270 There was a mismatch between the processor architecture of the project being built "MSIL" and the processor architecture of the reference…”

To resolve this, make sure all projects in the solution are using the same architecture: “x86”/“Win32” or “x64”. If you are using ASP.NET Core, there is an additional consideration: your projects also need to match the architecture of IIS Express, which is typically “x64”. If you see a “500 server error” caused by the loader failing, an architecture mismatch may be the problem.
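In a C# project that references the C++/CLI library, the architecture can be set via the Configuration Manager, or directly in the project file; a minimal sketch:

```xml
<!-- in the referencing .csproj: replace AnyCPU with an explicit architecture -->
<PropertyGroup>
  <PlatformTarget>x64</PlatformTarget>
</PropertyGroup>
```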

Send us Feedback

Please try this out. We’d love to hear your feedback to help us prioritize and build the right features. We can be reached via the comments below or email (visualcpp@microsoft.com). You also can always send us general feedback via Developer Community.

The post An Update on C++/CLI and .NET Core appeared first on C++ Team Blog.

Top Stories from the Microsoft DevOps Community – 2019.11.01


Hope Halloween didn’t bring any scary production issues for you this year – but did your pumpkins monitor the status of your builds? As we launch into November, we have a busy few weeks ahead, with lots of opportunities to catch up with folks from the team at the Microsoft Ignite conference, then QCon, as well as GitHub Universe the following week. But don’t worry if you are not able to make it to Orlando or San Francisco: we put together a quick video showing some of the people on the Azure DevOps engineering team talking about what DevOps means to them. The people behind Azure DevOps go far beyond the engineering team, though – our amazing community continues to ship fantastic integrations and advice every week. Here are some highlights from this week:

The Azure Readiness Checklist

Andrew from DevOpsGroup has shared his incredibly comprehensive list of what he uses and considers when engaging with clients on a new cloud project. This is a pretty exhaustive list and while you won’t need everything, going through the checklist is a great way to make sure you are explicitly excluding something for a reason (other than not knowing about it!). Andrew is taking contributions to the checklist so if you see something missing let him know.

CVE-2019-1306 Write-up from Mikhail Shcherbakov

If you’ve been following along with our latest security patches, you’ll have seen details about a couple of important issues that we fixed in the latest update. But have you ever wondered about the hard work that goes into discovering these issues? Thankfully, the awesome security researcher Mikhail Shcherbakov has posted a write-up of this vulnerability over at the Zero Day Initiative. It’s worth a read to see the fantastic work he carried out to identify and responsibly disclose the vulnerability – thanks, Mikhail!

Robot Framework GUI Testing with Azure DevOps

DevOps testing expert Anaïs van Asselt, from Capgemini, teaches us how to run the popular Python-based Robot Framework for GUI tests using Azure Pipelines. Anaïs has been writing a series of posts in this space, so she is definitely someone to keep an eye on if this area interests you.

Azure DevOps with Fortify on Demand

Fortify is a popular tool with many of our customers, as it helps with end-to-end application security and with testing security across the lifecycle. In this short video, the Fortify on Demand team takes you through how to set up Fortify on Demand with Azure DevOps and what the integration looks like in use.

What is Azure DevOps? | A DevOpsGroup Tutorial

And finally, if you are trying to raise awareness of Azure DevOps inside your team or encourage folks to upgrade from older versions of TFS, the team from DevOpsGroup have put together a nicely animated two-minute explainer of what Azure DevOps is that you can send along to your colleagues.

If you’ve written an article about Azure DevOps or find some great content about DevOps on Azure, please share it with the #AzureDevOps hashtag on Twitter!

The post Top Stories from the Microsoft DevOps Community – 2019.11.01 appeared first on Azure DevOps Blog.

The history of the GC configs


Recently, Nick from Stack Overflow tweeted about his experience of using the .NET Core GC configs – he seemed quite happy with them (minus the fact that they are not documented well, which is something I’m talking to our doc folks about). I thought it’d be fun to tell you the history of the GC configs, ‘cause it’s almost the weekend and I want to contribute to your fun weekend reading.

I started working on the GC toward the end of .NET 2.0. At that time we had really, really few public GC configs. “Public” in this context means “officially supported”. I’m talking just gcServer and gcConcurrent, which you could specify as application configs, and I think the retain VM one, which was exposed as a startup flag, STARTUP_HOARD_GC_VM, not an app config. Those of you who have only worked with .NET Core may not have come across “application configs” – that’s a concept that only exists on .NET, not .NET Core. If you have an .exe called a.exe, and you have a file called a.exe.config in the same directory, then .NET will look at things you specify in this file, under the runtime element, for things like gcServer or other non-GC configs.
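For example, an a.exe.config that turns on Server GC might look like this (gcServer and gcConcurrent are the documented element names):

```xml
<!-- a.exe.config, in the same directory as a.exe -->
<configuration>
  <runtime>
    <gcServer enabled="true"/>
    <gcConcurrent enabled="false"/>
  </runtime>
</configuration>
```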

At that time the official ways to configure the CLR were:

  • App configs, under the runtime element
  • Startup flags passed when you load the CLR via the Hosting API. Strictly speaking, you could also use some of the hosting APIs to customize other aspects of the GC, like providing memory for the GC instead of having it acquire memory via the OS VirtualAlloc/VirtualFree APIs, but I will not get into those – the only customer of those was SQLCLR, AFAIK.

(There were also other ways, like the machine config, but I will not get into those either, as they follow the same idea as the app configs.)

Of course, there have always been the (now (fairly) famous) COMPlus environment variables. We used them only for internal testing – it’s easy to specify env vars, and our testing framework read them, set them and unset them as needed. There were actually not many of those either – one example was GCSegmentSize, which was heavily used in testing (to test the expand-heap scenario) but not officially supported, so we never documented them as app configs.

I was told that env vars were not a good customer facing way to config things because people tend to set them and forget to unset them and then later they wonder why they were seeing some unexpected behavior. And I did see that happen with some internal teams so this seemed like a reasonable reason.

Startup flags are a hosting API thing, and the hosting API was something few people had heard of and far fewer used. You could say things like “start the runtime with Server GC and domain neutral”. It’s a native API, and most of our customers refused to use it when it was recommended to them. Today I’m aware of only one team that’s actively using it – not surprisingly, many people on that team used to work on SQLCLR 😛

For things you could specify as app configs, you could also specify them with env vars or even registry values, because on .NET our internal API to read these configs always checked all 3 places. While we had a different attitude toward configs you could specify via app config, which were considered officially supported, implementation-wise this was great because devs didn’t need to worry about which place a config would be read from – they knew that if they added a new config in clrconfigvalues.h, it could be specified in any of the 3 ways automatically.

During the .NET 4.x timeframe, we needed to add public configs for things like CPU groups (we started seeing machines with > 64 procs) or creating objects > 2GB, due to customer requests. Very few customers used these configs, so they could be thought of as special-case configs; in other words, the majority of scenarios ran with no configs aside from gcServer/gcConcurrent.

I was pretty wary of adding new public configs. Adding internal ones was one thing, but actually telling folks about them meant we’d basically be saying we were supporting them forever – in the older versions of .NET, the compatibility bar was ultra high. And tooling was of course not as advanced then, so perf analysis was harder to do (most of the GC configs were for perf).

For a long time folks used the 2 major flavors of the GC, Server and Workstation, mostly in the way they were designed to be used. But you know how the rest of this story goes – folks didn’t exactly use them “as designed originally” anymore. And as the GC diagnostic space advanced, customers were able to debug and understand GC perf better, and also used .NET in larger, more stressful and more diverse scenarios. So there was more and more desire from them to do more configuration on their own.

The good thing was that Microsoft internally had plenty of customers with very stressful workloads that called for configuration, so I was able to test on these stressful real-world scenarios. Around the time of .NET 4.6, I started adding configs more aggressively. One of our 1st-party customers was running a scenario with many managed processes. They had configured some to use Server GC and others to use Workstation GC. But there was nothing in between. This was when configs like GCHeapCount/GCNoAffinitize/GCHeapAffinitizeMask were added.

Around that time we also open sourced CoreCLR. The distinction between “officially supported” configs and internal-only configs was still there – in theory that line had become a little blurry because our customers could now see what internal configs we had 🙂 – but it also took time for Core adoption, so I wasn’t aware of anyone really using internal-only configs. We also changed the way config values were read – we no longer had the “one API reads them all” approach, so today on Core, where the “official” configs are specified via runtimeconfig.json, you’d need to use a different API, and specify both the name in the json and the name for the env var if you want to read from both.
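As an illustration, the json equivalent of the old gcServer app config, plus a heap count, looks like this (using the documented System.GC.* config names):

```json
{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": true,
      "System.GC.HeapCount": 4
    }
  }
}
```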

My development was still mostly on CLR, just because we had very few workloads on CoreCLR at that time, and being able to try things on large/stressful workloads was very valuable. Around this time I added a few more configs for various scenarios – notable ones are GCLOHThreshold and GCHighMemPercent. A team had their product running in a process coexisting with a much larger process on the same machine, which had a lot of memory. So the default memory load that GC considered a “high memory load situation”, which was 90%, worked well for the much larger process but not for them. When there was 10% physical memory left, that was still a huge amount for their process, so I added this config for them to specify a higher value (they specified 97 or 98), which meant their process didn’t need to do full compacting GCs nearly as often.

Core 3.0 was when I unified the source between .NET and .NET Core, so all the configs (“internal” or not) from .NET were made available on Core as well. The json way is obviously the official way to specify a config, but it appeared that specifying configs via env vars was becoming more common, especially with folks who work on scenarios with high perf requirements. I know quite a few internal and external customers use them (and have yet to hear of any incident that involved setting an env var in an undesirable fashion). A few more GC configs were added during Core 3.0 – GCHeapHardLimit, GCLargePages, GCHeapAffinitizeRanges, etc.

One thing that took folks (who used env vars) by surprise was that the number you specify for a config in env var form is interpreted as a hex number, not decimal. As for why it’s this way, that completely predates my time on the runtime team… and since everyone remembered it for sure after they got it wrong the first time 😛, and it was an internal-only thing, no one bothered to change it.
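A concrete illustration of the gotcha, using the heap count config mentioned earlier via its COMPlus env var:

```shell
# The value is read as hex, so "12" means 0x12 = 18 heaps, not 12!
export COMPlus_GCHeapCount=12

# To actually get 12 heaps, you'd specify hex c:
export COMPlus_GCHeapCount=c

# sanity-check the hex interpretation with shell arithmetic:
echo $((16#12))   # prints 18
```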

I am still of the opinion that the officially supported configs should not require you to have internal GC knowledge. Of course, “internal” is up for interpretation – some people might view anything beyond gcServer as internal knowledge. I’m interpreting “not having internal GC knowledge” in this context as “only needing general perf knowledge to influence the GC”. For example, GCHeapHardLimit tells the GC how much memory it’s allowed to use; GCHeapCount tells the GC how many cores it’s allowed to use. Memory/CPU usage are general perf knowledge that you already need to have if you work on perf. GCLOHThreshold actually violates this policy somewhat, so it’s something we’d like to tune dynamically in the GC instead of having users specify a number. But that’s work we haven’t done yet.

I don’t want to have configs that would need users to config things like “if this generation’s free list ratio or survival rate is > some threshold I would choose this particular GC to handle collections on that generation; but use this other GC to collect other generations”. That to me is definitely “requiring GC internal knowledge”.

So there you have it – the history of the GC configs in .NET/.NET Core.

The post The history of the GC configs appeared first on .NET Blog.

Azure services now run anywhere with new hybrid capabilities: Announcing Azure Arc


Enterprises rely on a hybrid technology approach to take advantage of their on-premises investment and, at the same time, utilize cloud innovation. As more business operations and applications expand to include edge devices and multiple clouds, hybrid capabilities must enable apps to run seamlessly across on-premises, multi-cloud, and edge devices, while providing consistent management and security across all distributed locations. Without coherence across these environments, cost and complexity grow exponentially. At Microsoft, we understand that hybrid cloud capabilities must evolve to enable innovation anywhere, while providing a seamless development, deployment and ongoing management experience.

Since its origin, Azure has been built to enable seamless hybrid capabilities – and we continue to deliver on our customers’ needs to enable purposeful innovation. Two years ago, we delivered Azure Stack to enable a consistent cloud model, deployable on-premises. Over the past year, we’ve extended Azure to provide DevOps for any environment and any cloud, we enabled cloud-powered security threat protection for any infrastructure, and we unlocked the ability to run Microsoft Azure Cognitive Services AI models anywhere. Today, we take a significant leap forward to enable customers to move from just hybrid cloud to truly deliver innovation anywhere with Azure.

Today, we are announcing Azure Arc, a set of technologies that unlocks new hybrid scenarios for customers by bringing Azure services and management to any infrastructure. Azure Arc is available in preview starting today.

Extend Azure management and security to any infrastructure

Hundreds of millions of Azure resources are organized, governed and secured daily by customers using Azure management. Azure Arc extends these proven Azure management capabilities to Linux and Windows servers, as well as Kubernetes clusters on any infrastructure across on-premises, multi-cloud and edge. Customers can now have a consistent and unified approach to managing different environments using robust, established capabilities such as Azure Resource Manager, Microsoft Azure Cloud Shell, Azure portal, API, and Microsoft Azure Policy. With Azure Arc, developers can build containerized apps with the tools of their choice and IT teams can ensure that the apps are deployed, configured, and managed uniformly using GitOps-based configuration management. Finally, Azure Arc makes it easier to implement cloud security across environments with centralized role-based access control and security policies. Learn more about Azure Arc.

Run Azure data services anywhere

With Azure Arc, customers can now realize the benefits of cloud innovation, including always up-to-date data capabilities, deployment in seconds (rather than hours), and dynamic scalability on any infrastructure. Customers now have the flexibility to deploy Azure SQL Database and Azure Database for PostgreSQL Hyperscale where they need it, on any Kubernetes cluster. From the Azure portal, customers get a unified and consistent view of all their Azure data services running across on-premises and clouds and can apply consistent policy, security and governance of data across environments. Customers can get limitless scale by seamlessly spinning up additional Kubernetes clusters in Azure Kubernetes Service (AKS) if they run out of capacity on-premises. Learn more about Azure data services anywhere.

“We are excited to see Microsoft bringing Azure data services and management to any infrastructure”, said Erik Vogel, Vice President for Customer Success, Hybrid Cloud Software and Services at Hewlett Packard Enterprise. “Through our partnership with Microsoft we hope to deliver a true as a Service experience across environments to help manage both the databases and the underlying infrastructure, and offer a consistent experience across on-premises and the cloud.” 

Expanded Azure Stack Hub offerings for any edge

Enterprises across 60 countries including Hong Kong Exchanges and Clearing Limited, KPMG Norway and Airbus Defense & Space are building hybrid solutions powered by Azure Stack Hub connected and disconnected from Azure. Today, we are expanding our Azure Stack Hub portfolio to offer customers even more flexibility with the addition of Azure Stack Edge. Azure Stack Edge is a managed AI-enabled edge appliance that brings compute, storage and intelligence to any edge. Customers will be able to take advantage of new capabilities including Virtual Machine support, a GPU based form factor, high availability with multiple nodes, and multi-access edge compute (MEC). We are also introducing a new rugged series of Azure Stack Hub form-factors designed to provide cloud capabilities in the harshest environment conditions supporting scenarios such as tactical edge, humanitarian and emergency response efforts.

Azure hybrid innovation anywhere infographic

We look forward to sharing even more updates on our innovation in hybrid at Microsoft Ignite this week. To learn more about our Azure hybrid offerings, visit the Azure hybrid overview page. You can also register for our upcoming webinar that will walk through key Azure hybrid capabilities including Azure Arc.


Azure. Invent with purpose.

Empowering developer velocity with the most complete toolchain


Today every company is a software company. Across all industries from retail to healthcare to financial services and more, software is at the heart of every company’s strategy. According to a recent study by ISACA, 91 percent of business leaders saw digital transformation as a way of sparking innovation and finding efficiencies for their organizations.

A key catalyst for digital transformation is developers. Developers are the builders of our era, creating the ideas and writing the code that enables digital transformation for organizations around the world. To become a digital company, every company must build a culture that empowers developers to achieve more.

Organizations that successfully empower developers realize developer velocity, enabling developers to create more, innovate more, and solve more problems. Developer velocity is not just about speed, but about unleashing developer ingenuity, turning developers’ ideas into software with speed and agility to support the needs of your customers and the business.

Developer velocity means enabling developers to:

  • Build productively
  • Collaborate globally and securely
  • Scale innovation

Microsoft is committed to delivering solutions designed for developers and development teams to support your digital transformation journey in each of these areas, so you can innovate with purpose.

Build productively

Microsoft’s developer DNA is expressed through our tools, enabling developers to be more productive without changing the way you work while exposing you to technologies, such as Kubernetes, AI, and DevOps, along the way. With support for every language and framework, developers can build on your terms, and deploy where you want.

Our mission with Visual Studio is to provide tools for every developer and today, according to a recent survey from Stack Overflow, Visual Studio Code and Visual Studio are the most popular development environments and tools used across the developer ecosystem. But we’re not stopping here. We know from talking with developers every day that software development is a constantly evolving craft. The way developers work is changing and we’re investing in tools that reflect modern workflows and practices.

For example, IntelliCode uses AI to bring the knowledge of the open source community into your code editor as you type. IntelliCode can suggest completions for whole lines of code. It can help simplify repetitive and tedious tasks like code refactoring. It can even help propagate best practices across your whole development team.

One of the biggest pain points in the developer’s job is to set up a new dev box. Whether you’re onboarding to a new team, starting a new project, or switching between tasks across different codebases, developers can spend hours setting up development environments. To help developers focus on what matters, today we’re announcing the preview of Visual Studio Online, which leverages the power of the cloud to make it easy to create and share dedicated development environments on-demand. You can create a pre-configured, isolated environment for each project, each repo, each task—in minutes. It doesn’t use any local resources and is accessible from any device. Visual Studio Online is now available for Visual Studio Code in preview and Visual Studio in preview. To learn more and sign up for the preview, view the announcement blog post.

Collaborate globally and securely

Software development is a team sport, and collaboration with peers and knowledge sharing within the team are fundamental. The increased pressure to continuously innovate challenges teams to move with more agility, redefining software delivery processes and breaking down silos between development and operations.

At Microsoft, we know these challenges well as we too had to transform. We understand that the adoption of DevOps is an ongoing journey that requires a culture change and that change can be hard. As our customers walk a similar path, we want to help you realize the benefits we have seen from this transformation. We’re excited to share our experiences and learnings through the DevOps journey stories of Microsoft teams who have changed the way they work and have enabled this transformation with the support of technology.

We also know that developers solve problems with the support of the community, both within and outside of your organizational boundaries. Last year, Microsoft completed the acquisition of GitHub, the home of open source and the largest developer community on the planet, with over 40 million developers. GitHub transformed collaboration with a Git-hosted solution focused on community, creating the home where developers come together and work together.

Open source has also become instrumental in accelerating innovation. According to a recent report by Synopsys, 99 percent of codebases with over 1,000 files contain open source components. While this enables developers to innovate with speed, this also introduces new responsibilities like how to create and consume open source in a secure and trusted way. With GitHub, developers have tools, best practices, and infrastructure to help make software development secure. For example, developers get automatic security fixes for dependencies in your projects. GitHub’s recent acquisition of Semmle, a semantic code analysis engine, allows developers to detect vulnerabilities as part of your developer workflows to prevent vulnerabilities before they are ever released.

Finally, Microsoft is building integrations to GitHub making the developer experience seamless. Visual Studio Code’s integration with GitHub pull requests makes it easy to review source code inside the editor, where it was written. Developers can connect your GitHub repositories to Azure Boards to use kanban boards, backlogs, and dashboards for flexible work tracking. We’ve built upon GitHub Actions with GitHub Actions for Azure to make it easy to deploy to Azure environments such as Azure App Service and Azure Kubernetes Service.

Scale your innovation

Sparking innovation to enhance customer experiences and line-of-business applications is top of mind for every business leader. Whether your company is building web, mobile, IoT, or mixed reality experiences, innovation is key to the future success of your organization.

Microsoft Azure offers over 100 services that help your organization drive and scale innovation to achieve your business outcomes. Developers have the freedom to create and run applications on a massive, global network using your preferred tools and frameworks. More and more, our customers are turning to Azure serverless technologies to build cloud-native applications designed to respond quickly to market signals, reduce costs, and move faster throughout the development cycle. Direct.One, Maersk, and Shell rely on Azure serverless and fully managed services to delight customers every day. Today, more than two million applications run on the Azure serverless platform.

Today, we’re announcing the general availability of serverless capabilities to better serve the needs of our customers. With PowerShell support for Azure Functions, operations teams can now set up serverless automation processes and take advantage of the event-driven programming model for infrastructure management and scripting tasks across Azure and hybrid environments. To make serverless a real design choice for the most demanding and mission-critical applications, the Azure Functions Premium plan makes cold start a thing of the past. It allows for more powerful hardware, increased control over the minimum and maximum number of instances for more predictable costs, and the ability to pre-warm resources for optimal performance.

Containers and Kubernetes are central to cloud-native application patterns. Forrester recently recognized Azure as a leader for enterprise container platforms, offering the strongest developer experience and global reach. To further support the development of mission-critical workloads with stringent requirements around reliability and scalability, today we’re announcing the general availability of Azure Kubernetes Service (AKS) support for availability zones, cluster-level autoscaling, and multiple node pools, plus a preview of Azure Security Center integration for Azure Kubernetes Service for container image vulnerability assessment and Kubernetes cluster threat protection. To learn more about these capabilities and more Azure Kubernetes Service innovations announced today, check out all of the Azure updates. And, to simplify containerized application development for Java developers, we are announcing the preview of Azure Spring Cloud, built, operated, and supported in partnership with Pivotal. Azure Spring Cloud is built on top of Azure Kubernetes Service and abstracts away the complexity of infrastructure management and Spring Cloud middleware management.

To realize innovation goals, organizations need to focus on and scale developers’ investments. According to a recent survey by Indeed, over 86 percent of organizations struggle to hire all of the technical talent needed to build applications. Microsoft Power Apps, a low-code tool for citizen developers, expands the pool of people empowered to build applications. With the combination of Power Apps and Azure, citizen developers can easily build business apps that can be centrally managed through IT and easily extended by developers using Azure Functions or APIs to scale innovation across your organization.

Developers are the key to your digital transformation. Empowering developers with the latest technologies and tools is critical to the future success of your organization. Today’s announcements highlight Microsoft’s commitment to ensure every developer has cutting-edge tools to create the next generation of applications and drive innovation with developer velocity.

We have even more to share at Microsoft Ignite. Be sure to tune into Scott Hanselman’s keynote at 9:00 AM ET on Tuesday, November 5th to learn how to build an application in Azure using your language of choice such as Java, PHP, Node.js, .NET, or Python. Make sure to download the code and have fun defeating our bot after the session!


Azure. Invent with purpose.

Simply unmatched, truly limitless: Announcing Azure Synapse Analytics


Today, businesses are forced to maintain two types of analytical systems: data warehouses and data lakes. Data warehouses provide critical insights on business health. Data lakes can uncover important signals on customers, products, employees, and processes. Both are critical, yet they operate independently of one another, which can lead to uninformed decisions. At the same time, businesses need to unlock insights from all their data to stay competitive and fuel innovation with purpose. Can a single cloud analytics service bridge this gap and enable the agility that businesses demand?

Azure Synapse Analytics

Today, we are announcing Azure Synapse Analytics, a limitless analytics service that brings together enterprise data warehousing and Big Data analytics. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources, at scale. Azure Synapse brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate business intelligence and machine learning needs.

A diagram showing how Azure Synapse Analytics connects Power BI, Azure Machine Learning, and your ecosystem.

Simply put, Azure Synapse is the next evolution of Azure SQL Data Warehouse. We have taken the same industry-leading data warehouse to a whole new level of performance and capabilities. In fact, it’s the first and only analytics system to have run all TPC-H queries at petabyte-scale. Businesses can continue running their existing data warehouse workloads in production today with Azure Synapse and will automatically benefit from the new capabilities, which are in preview. Businesses can put their data to work much more quickly, productively, and securely, pulling together insights from all data sources, data warehouses, and big data analytics systems. Partners can continue to build with us, as Azure Synapse will offer a rich and vibrant ecosystem of partners like Databricks, Informatica, Accenture, Talend, Attunity, Pragmatic Works, and Adatis.

With Azure Synapse, data professionals of all types can collaborate, build, manage, and analyze their most important data with ease, all within the same service. From Apache Spark integration with the powerful and trusted SQL engine to code-free data integration and management, Azure Synapse is built for every data professional.

That is why companies like Unilever are choosing Azure Synapse.

"Our adoption of the Azure Analytics platform has revolutionized our ability to deliver insights to the business. We are very excited that Azure Synapse Analytics will streamline our analytics processes even further, given the seamless way all the pieces have come together so well."

Nallan Sriraman, Global Head of Technology, Unilever

Limitless scale

Azure Synapse delivers insights from all your data, across data warehouses and big data analytics systems, with blazing speed. With Azure Synapse, data professionals can query both relational and non-relational data at petabyte-scale using the familiar SQL language. For mission-critical workloads, they can easily optimize the performance of all queries with intelligent workload management, workload isolation, and limitless concurrency.

Powerful insights

With Azure Synapse, enabling business intelligence and machine learning is a breeze. It is deeply integrated with Power BI and Azure Machine Learning to greatly expand the discovery of insights from all your data and apply machine learning models to all your intelligent apps. Significantly reduce project development time for business intelligence and machine learning projects with a limitless analytics service that enables you to seamlessly apply intelligence over all your most important data — from Dynamics 365 to Office 365, to SaaS services that support Open Data Initiative — and easily share data with just a few clicks.

Unified experience

Build end-to-end analytics solutions with a unified experience. The Azure Synapse studio provides a unified workspace for data prep, data management, data warehousing, big data, and AI tasks. Data engineers can use a code-free visual environment for managing data pipelines. Database administrators can automate query optimization. Data scientists can build proofs of concept in minutes. Business analysts can securely access datasets and use Power BI to build dashboards in minutes, all while using the same analytics service.

Unmatched security

Azure has the most advanced security and privacy features in the market. These features are built into the fabric of Azure Synapse, such as automated threat detection and always-on data encryption. And for fine-grained access control, businesses can help ensure data stays safe and private using column-level security and native row-level security, as well as dynamic data masking to automatically protect sensitive data in real time.
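As a rough illustration of the dynamic data masking idea (not the Synapse implementation, which is configured on the service side rather than in client code), masking substitutes protected column values at read time for unprivileged users:

```python
# Illustrative sketch of dynamic data masking: sensitive columns are masked
# at query time for unprivileged readers. The column names and mask formats
# here are examples, not the service's built-in masking functions.

def mask_email(value: str) -> str:
    """Mask an email address, keeping the first character and the domain."""
    local, _, domain = value.partition("@")
    return (local[:1] + "***@" + domain) if domain else "***"

MASKS = {
    "email": mask_email,
    "ssn": lambda v: "***-**-" + v[-4:],   # expose only the last four digits
}

def apply_masking(row: dict, privileged: bool) -> dict:
    """Return the row unchanged for privileged users, masked otherwise."""
    if privileged:
        return dict(row)
    return {col: MASKS[col](val) if col in MASKS else val
            for col, val in row.items()}

row = {"name": "Ada", "email": "ada@contoso.com", "ssn": "123-45-6789"}
masked = apply_masking(row, privileged=False)
# masked["email"] == "a***@contoso.com"; masked["ssn"] == "***-**-6789"
```

The key property, which the service enforces centrally, is that the underlying data is never changed; only what unprivileged queries see is rewritten.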

 

Get started today

Businesses can continue running their existing data warehouse workloads in production today with generally available features on Azure Synapse.



Companies of all sizes tackle real business problems with Azure AI


There are incredible transformations happening across industries through the application of AI. We have a front row seat with customers who are successfully digitizing core business processes and creating more engaging and personalized customer experiences. With Microsoft’s AI platform, Azure AI, our vision continues to center on helping our customers innovate with purpose, using productive, enterprise-scale, secure solutions. This vision is made stronger by recent partnerships like our investment in OpenAI to develop a hardware and software platform that extends Microsoft Azure capabilities in large-scale AI systems.

Today, through a number of AI innovations, we continue making it easier for organizations to adopt and apply AI in a way that meets their needs, wherever they are in their AI journey. Product updates include new capabilities in Microsoft Azure Machine Learning that boost the productivity of developers and data scientists of all skill levels, new innovations in Microsoft Azure Cognitive Services and Microsoft Azure Bot Service to simplify the creation of AI apps and agents, and new enhancements to Azure Cognitive Search to enable the development of knowledge mining applications.

Tremendous customer momentum

We are humbled by the tremendous adoption of Azure AI. Organizations large and small have adopted Azure AI solutions to deploy AI at scale and build with confidence knowing that they own and control their data. With our proven AI technologies, customers like Novartis, Humana, and UPS, as well as others across sectors like manufacturing, retail, aerospace, and animal conservation are deploying Azure AI services to drive meaningful outcomes with AI at scale.

We’re pleased to share that Azure AI now has more than 20,000 active paying customers – and more than 85 percent of Fortune 100 companies have used Azure AI in the last 12 months. In addition, Azure AI customers run over 1 million machine learning experiments per month, use Azure Cognitive Search to process over 6 billion documents per day, run over 5 billion cognitive services transactions per month, and process over 1 billion bot messages per month.

Accelerating machine learning adoption

Our new Microsoft Azure Machine Learning capabilities, including the new machine learning designer, automated machine learning enhancements, and built-in notebooks, are designed to meet the needs of data scientists and developers of all skill levels. New machine learning operations (MLOps) capabilities help data science and IT teams better collaborate and increase the pace of model deployment with more governance and control. We continue to invest in open ecosystems with support for R and the availability of ONNX Runtime 1.0, which simplifies the process of optimizing machine learning models to run on a variety of chipsets. You can get started for free with Azure Machine Learning today.
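At its core, automated machine learning fits many candidate models and keeps the best performer on held-out data. A deliberately tiny sketch of that loop, in plain Python with made-up candidate models (the real service also sweeps featurization and hyperparameters), might look like:

```python
# Toy sketch of the core of automated model selection: fit each candidate,
# score it on a validation set, and keep the lowest-error model.

def fit_mean(xs, ys):
    """Baseline model: always predict the training mean."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b on one feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var if var else 0.0
    b = my - a * mx
    return lambda x: a * x + b

def mse(model, xs, ys):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def auto_select(candidates, train, valid):
    """Fit every (name, fit_fn) candidate; return the best by validation MSE."""
    fitted = {name: fit(*train) for name, fit in candidates}
    return min(fitted.items(), key=lambda kv: mse(kv[1], *valid))

train = ([1, 2, 3, 4], [2.1, 3.9, 6.2, 8.0])   # roughly y = 2x
valid = ([5, 6], [10.1, 11.8])
best_name, best_model = auto_select(
    [("mean", fit_mean), ("linear", fit_linear)], train, valid)
# best_name == "linear"
```

Schneider Electric's month-to-day speedup comes from automating exactly this kind of search, at much larger scale, over many model families at once.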

Customers like Schneider Electric are using Azure Machine Learning to significantly minimize worker risk, save time, and lower costs. By leveraging the automated machine learning capabilities within Azure Machine Learning, Schneider Electric reduced the time needed to identify the right models for predictive maintenance from one month to one day.

“All the data scientists on our team enjoy using Azure Machine Learning service. Why? Because it’s fully interoperable with all the other tools they use in their day-to-day work, no extra training is needed, and they get more done faster now.” —Matthieu Boujonnier: Analytics Application Architect and Data Scientist, Schneider Electric

Lexmark is using Azure Machine Learning to glean valuable insights from the data it collects from millions of IoT-enabled printers and make more informed business decisions.

“Our Connected Field Service takes data from our Lexmark IoT Hub, augmented by Azure Machine Learning, and feeds information into Dynamics 365, so we can make predictive diagnostics for individual machines and alert service technicians to be ready.”—Brad Clay, Senior Vice President, Chief Information and Compliance Officer, Lexmark International

The wildlife crime team from Africa’s Peace Parks Foundation (PPF), in partnership with the South African conservation agency Ezemvelo KZN Wildlife, is using Azure Machine Learning to monitor and prevent rhino poaching.

Simplifying the development of intelligent apps and agents

Azure Cognitive Services are a comprehensive set of domain-specific, ready-to-use, AI models. Today, we are announcing the general availability of a new Azure Cognitive Service called Personalizer, the industry's first AI service based on reinforcement learning. Personalizer allows businesses to create rich customer interactions by prioritizing the most relevant content and experiences in each customer interaction. New Speech service capabilities are available in preview, including Custom Neural Voice which enables customers to create branded voices using deep neural networks, and the ability to use Office 365 data to automatically create optimized custom speech models. Updates to Text Analytics include the ability to detect and extract personally identifiable information in documents and expanded entity type support for more than 100 named entity types. A new Bot Framework Composer helps simplify the creation of bots through a graphical user interface. You can get started for free with Cognitive Services and Bot Service.
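To illustrate the reinforcement-learning idea behind Personalizer (the service itself is driven by Rank and Reward API calls, not client-side code like this), here is a toy epsilon-greedy ranker that learns which content users reward:

```python
# Toy epsilon-greedy bandit illustrating reinforcement-learning-based
# personalization: rank content by learned reward, explore occasionally.
# Action names and the class itself are hypothetical, for illustration only.
import random

class EpsilonGreedyRanker:
    def __init__(self, actions, epsilon=0.1, seed=0):
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.totals = {a: 0.0 for a in actions}   # cumulative reward per action
        self.counts = {a: 0 for a in actions}     # times each action was shown

    def rank(self):
        """Return actions best-first, occasionally exploring at random."""
        if self.rng.random() < self.epsilon:
            order = list(self.totals)
            self.rng.shuffle(order)
            return order
        return sorted(self.totals, key=self._mean, reverse=True)

    def reward(self, action, value):
        """Report the observed reward (e.g. a click) for a shown action."""
        self.totals[action] += value
        self.counts[action] += 1

    def _mean(self, action):
        c = self.counts[action]
        return self.totals[action] / c if c else 0.0

# epsilon=0.0 disables exploration so the example is deterministic.
ranker = EpsilonGreedyRanker(["article-a", "article-b"], epsilon=0.0)
for _ in range(5):
    ranker.reward("article-b", 1.0)   # users keep clicking article-b
ranker.reward("article-a", 0.0)
# ranker.rank()[0] == "article-b"
```

Personalizer's Rank call plays the role of `rank()` here and the Reward call plays the role of `reward()`, with the learning loop running inside the service.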

Customers like the European aerospace manufacturer, Airbus, are using Azure Cognitive Services to provide predictive maintenance for mixed aircraft fleets:

“Innovation has always been a driving force at Airbus. Using Anomaly Detector, an Azure Cognitive Service, we can solve some aircraft predictive maintenance use cases more easily.” —Peter Weckesser, Digital Transformation Officer, Defence and Space, Airbus

Spotify is making it easier for anyone to create podcasts with their Soundtrap for Storytellers application. Using Speech Service, Spotify is helping content creators streamline the entire podcast editing process by auto-transcribing podcasters’ audio tracks and allowing them to edit directly within the transcribed document.

In hospitality, Caesars Entertainment, which operates brands including Harrah’s, Caesars, and Horseshoe, is using Azure Bot Service to deploy a text message bot to answer users’ questions.

Uncovering latent insights from content with knowledge mining

Azure Cognitive Search, formerly known as Azure Search, is the only cloud search service with built-in AI capabilities that enable you to discover patterns and relationships in your content, understand sentiment, extract key phrases, and more. Updates to Azure Cognitive Search, including new data connectors, additional built-in AI skills, and expanded region availability, make it easier for enterprises to build knowledge mining applications that ingest, enrich, and search structured and unstructured information, informing better business decisions. You can get started for free with Azure Cognitive Search.
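To make the "ingest, enrich, and search" idea concrete, here is a minimal sketch of the indexing step that underlies any search service: a plain inverted index in Python. Cognitive Search layers AI enrichment (OCR, entity extraction, and so on) and relevance ranking on top of this kind of structure:

```python
# Minimal inverted index: the core data structure behind document search.
# The sample documents are invented for illustration.
import re
from collections import defaultdict

def build_index(docs: dict) -> dict:
    """Map each token to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in re.findall(r"[a-z0-9]+", text.lower()):
            index[token].add(doc_id)
    return index

def search(index: dict, query: str) -> set:
    """Return ids of documents containing every query term (AND semantics)."""
    terms = re.findall(r"[a-z0-9]+", query.lower())
    if not terms:
        return set()
    results = set(index.get(terms[0], set()))
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

docs = {
    "d1": "Contract renewal terms for 2019",
    "d2": "Invoice for contract services",
    "d3": "Employee handbook",
}
index = build_index(docs)
# search(index, "contract") == {"d1", "d2"}
# search(index, "contract renewal") == {"d1"}
```

Knowledge mining extends this by enriching each document before indexing, so that entities, key phrases, and text pulled out of images become searchable terms too.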

The Atlantic is using Azure AI to catalog and preserve 160 years of published history. Leveraging Azure Cognitive Search, the publication is transitioning from hard copy to a digital system where its archives can be explored by the public as well as used as a resource for writers to build connections between stories and enrich their content.

Archive360, an intelligent information management solution provider, uses Azure Cognitive Search to enable their customers to ask complex questions of petabyte-sized archive datasets both quickly and cost-effectively.

"By using Azure Cognitive Search to provide customers with the search performance and simplicity they need, we can deliver deeper data insights than ever before."—Tibi Popp, Chief Technology Officer, Archive360

autoTRADER.ca, which serves more than 5 million Canadians monthly in the market for a new or used car, has been using Azure Cognitive Search to launch new growth opportunities, including a dealer-to-dealer auction site, and plans to replace its old search engine with a more cost-effective, scalable, and improved search experience for its consumer marketplace.

“Azure Cognitive Search enabled us to launch the dealer auction site. We wouldn’t have been able to do it otherwise, and we’re really excited about using Azure Cognitive Search for the marketplace. It gives us an opportunity to provide better and better services to our customers with instant, seamless experiences across all devices.”—Allen Wales: Vice President of Technology, autoTRADER.ca

While we’re pleased to see start-ups to enterprise companies adopting Azure AI, we remain focused on addressing barriers that hinder companies’ ability to take advantage of all the benefits of AI.

Stay tuned for more updates in the coming months; we’ll have some exciting things to share. In the meantime, we look forward to helping you and your company explore how you can tackle your hardest business problems with the power of Microsoft’s AI platform, Azure AI. Get started with a free trial of Azure AI today.




Announcing .NET Core 3.1 Preview 2


Today, we’re announcing .NET Core 3.1 Preview 2. .NET Core 3.1 will be a small and short release focused on key improvements in Blazor and Windows desktop, the two big additions in .NET Core 3.0. It will be a long term support (LTS) release with an expected final ship date of December 2019.

You can download .NET Core 3.1 Preview 2 on Windows, macOS, and Linux.

ASP.NET Core and EF Core are also releasing updates today.

Visual Studio 16.4 Preview 3 and Visual Studio for Mac 8.4 Preview 3 are also releasing today. They are required updates to use .NET Core 3.1 Preview 2. Visual Studio 16.4 includes .NET Core 3.1, so just updating Visual Studio will give you both releases.


Improvements

The biggest improvement in this release is support for C++/CLI (AKA “managed C++”). The changes for C++/CLI are primarily in Visual Studio. You need to install the “Desktop development with C++” workload and the “C++/CLI support” component in order to use C++/CLI. You can see this component selected (it is the last one displayed) in the image below.

This component adds a couple of templates that you can use:

  • CLR Class Library (.NET Core)
  • CLR Empty Project (.NET Core)

If you cannot find them, just search for them in the New Project dialog.

Closing

The primary goal of .NET Core 3.1 is to polish the features and scenarios we delivered in .NET Core 3.0. .NET Core 3.1 will be a long term support (LTS) release, supported for at least 3 years.

Please install and test .NET Core 3.1 Preview 2 and give us feedback. It is not yet supported or recommended for use in production.

If you missed it, check out the .NET Core 3.0 announcement from last month.

The post Announcing .NET Core 3.1 Preview 2 appeared first on .NET Blog.

All Things Developer Tools at Microsoft Ignite


There is a lot of developer goodness happening at Ignite this week. Visual Studio Online is available as a public preview for developers to try cloud-hosted development environments with the tool of their choice. Visual Studio 2019 version 16.4 Preview 3 and Visual Studio 2019 for Mac version 8.4 Preview 2 just released with tons of new productivity features. There are also a bunch of great sessions that deep dive into all this and much more.

Welcome to the family Visual Studio Online

Today Visual Studio Online moved to public preview, providing managed, on-demand development environments for long-term projects, for quickly prototyping a new feature, or for quick tasks like reviewing pull requests. You can work with environments from any device using Visual Studio Code, Visual Studio 2019, or the built-in browser-based editor. The features for connecting to a cloud environment from the Visual Studio 2019 IDE are available in private preview. Read more about the exciting announcement in the Visual Studio Online blog post.

The Visual Studio 2019 IDE can also create and connect to Visual Studio Online environments and take advantage of all the benefits of a cloud powered development environment to build any application.

  • Get up and running quickly by letting Visual Studio Online install runtimes, SDKs and dev tools
  • Create environments quickly to isolate your work across different projects
  • Spin up extra capacity with a premium environment that gives you the memory and compute to run your toughest workloads
  • Give your teammates a ready-to-go environment for rich code reviews without tying up your dev box or asking anyone to set up their own environment

We’re eager to invite developers that love working in a full IDE to join our private preview.

Visual Studio 2019 version 16.4 Preview 3 is here

Preview 3 of Visual Studio 2019 version 16.4 is now available. If you aren’t already running the Preview builds, you can download and try out the latest side by side with our current version of Visual Studio. Of course, if you already have a Preview installed, just click the notification or go to the Help menu and click Check for updates.

Speedy code navigation

In 16.3 we introduced file search for all languages and semantic code search for C# and VB into the new Search bar. In 16.4 we have rewritten the Find in Files tool window to address suggestions and feedback on the most popular search control in the IDE. Performance has been significantly improved, and reliability issues that previously led to frustrating hangs have been fixed. We cleaned up the UI and added a few improvements, like adding exclusions to a search query.

IntelliSense without Using Directives

IntelliSense is indispensable for browsing and finding members of a type, but if you’re missing a using directive you might not find what you need. In 16.4 IntelliSense shows members and types for any assemblies referenced from your project and automatically adds the using directive to keep your code neat and tidy.

IntelliCode helps with argument completion

IntelliCode saves you time by putting what you’re most likely to use at the top of your IntelliSense completion list. IntelliCode can also provide suggestions for arguments as you type, bringing the most likely arguments to the top of your completion list. Set your cursor on a method and type Ctrl+Space to give it a try.

Automatically re-train and acquire IntelliCode team models

Since we released the ability to create team models trained on your own code last year, we’ve heard feedback that sharing team models through links can be cumbersome and it’s difficult to remember to retrain your models. With our new Azure DevOps task and automatic acquisition of models, you can set up your pipeline to take care of team model updates and automatically share the model with others working in the same repository.

Refactoring with help from IntelliCode

IntelliCode goes beyond smarter IntelliSense. IntelliCode learns from your edits and provides refactoring suggestions for repeated edits as you type. IntelliCode understands the syntactic structure of your changes, so suggestions include locations with similar structure but different variables and formatting. These suggestions appear with your other refactoring quick fixes and are available from the Ctrl+. shortcut. Remember to enable this early IntelliCode feature from Tools > Options.

More real estate for code and easier navigation with vertical document tabs

We’re excited that 16.4 includes the most popular customer suggestion: vertical document tabs. Real estate for code is at a premium in many environments. Vertical document tabs give vertical space to your source code while better utilizing your horizontal screen space. Stretch the tab column to bring long file names into view, sort tabs alphabetically, and use tab groups to get more code on the screen.

XAML code editor pop up, merge resource dictionaries and more

In this release there are multiple new features for desktop developers building WPF or UWP applications. One such feature is the ability to open the XAML code editor window separately from the XAML designer using our new “pop up” button next to XAML tab:

Other features include the ability to easily merge an existing resource dictionary into your application with our new solution explorer command “Merge Resource Dictionary Into Active Window”, the ability to filter Live Visual Tree to “Just My XAML” and more. For a complete list of what’s new for desktop developers see the release notes.

Audio calls and app sharing for desktop apps in Live Share

Real-time collaboration with Live Share opens the door for pair programming, rich code reviews, and help from experts even when they are remote. A quick call can often provide more context behind the code. Now the Live Share tool window lets you start an audio call with other collaborators in a Live Share session.

You can now share desktop and web apps in a Live Share session. Start a Live Share session with another developer or tester and start debugging. Guests will see the same running app you see on your local machine. They can even interact with the application and trigger breakpoints in the debugger.

Pin Properties in the Debugger

Identifying objects by their properties while debugging has just become easier and more discoverable with the new Pinnable Properties tool. In short, hover over a property you want to display in the Watch, Autos, or Locals window, click the pin icon, and immediately see the information you are looking for at the top of your display!

Visual Studio 2019 for Mac version 8.4 Preview 2

Today we’re releasing Visual Studio 2019 for Mac version 8.4 Preview 2. This is an exciting release because it adds significant accessibility improvements to the overall IDE, as well as support for the .NET Core 3.1 Preview and full Blazor (server-side) development. With the latest version you can create, build, debug, and run Blazor projects and then deploy your Blazor app directly to Azure without ever leaving the IDE. Learn more about this release in the latest Visual Studio for Mac blog post.

Ignite sessions to check out this week

The above is just a glimpse of the developer topics that will be covered at Microsoft Ignite this week. Make sure you head on over to https://www.microsoft.com/ignite to catch the following sessions and more. Times listed are in Eastern Time (ET).

Monday, Nov 4

2:00 PM – 2:45 PM

Empowering every developer to innovate with Microsoft Azure
Monday, Nov 4

3:15 PM – 4:00 PM

Increase your .NET productivity with Visual Studio and Visual Studio for Mac
Tuesday, Nov 5

9:00 AM – 10:15 AM

Keynote: App development for everyone with Hanselman and friends
Tuesday, Nov 5

10:30 AM – 11:15 AM

Visual Studio Online: A look at the future of developer productivity and collaboration
Tuesday, Nov 5

11:45 AM – 12:30 PM

Visual Studio Code tips and tricks
Tuesday, Nov 5

1:00 PM – 1:45 PM

The now and then of cloud native applications in the enterprise using containers
Tuesday, Nov 5

2:15 PM – 3:00 PM

Ship it! Build for any platform with Azure Pipelines, and make shipping fun and stress-free
Tuesday, Nov 5

2:15 PM – 3:00 PM

Building serverless web applications in Azure
Tuesday, Nov 5

3:30 PM – 4:15 PM

Enterprise-grade Node.js on Azure
Tuesday, Nov 5

4:30 PM – 5:15 PM

Being a social developer
Wednesday, Nov 6

9:15 AM – 10:00 AM

Building enterprise capable serverless applications
Wednesday, Nov 6

10:30 AM – 11:15 AM

Create amazing web apps with ASP.NET Core
Wednesday, Nov 6

12:30 PM – 1:45 PM

.NET platform overview and roadmap
Wednesday, Nov 6

12:45 PM – 1:30 PM

Windows App Development Roadmap: Making Sense of WinUI, UWP, Win32, .NET
Wednesday, Nov 6

2:15 PM – 3:00 PM

Community powered continuous integration with GitHub Actions
Thursday, Nov 7

9:15 AM – 10:00 AM

Applying best practices to Azure Kubernetes Service (AKS)
Thursday, Nov 7

10:30 AM – 11:15 AM

Debugging tips and tricks in Visual Studio 2019
Thursday, Nov 7

12:45 PM – 1:30 PM

Moving the web forward: Microsoft Edge for web developers
Thursday, Nov 7

1:00 PM – 1:45 PM

Cloud native applications with .NET Core and Azure Kubernetes Service
Thursday, Nov 7

2:15 PM – 3:00 PM

Build Python apps in Azure faster with Visual Studio Code
Thursday, Nov 7

3:30 PM – 4:15 PM

Mobile app development reimagined with Xamarin and .NET
Friday, Nov 8

9:15 AM – 10:00 AM

.NET Microservices with Azure Service Fabric: A real-world perspective
Friday, Nov 8

10:30 AM – 11:15 AM

Linux based web app development made easy on App Service
Friday, Nov 8

11:45 AM – 12:30 PM

Build a highly secure and scalable mobile backend using App Center

If you’re at the event in Orlando this week, be sure to stop by the Development & Architecture Center to chat with our team and catch one of the many theater and lightning talks. There are also hands-on workshops throughout the week for you to experience these technologies first-hand.

Thanks,
Anthony & the entire Visual Studio team

The post All Things Developer Tools at Microsoft Ignite appeared first on Visual Studio Blog.


Visual Studio 2019 for Mac version 8.4 Preview 2, now available


Today we released the latest preview, Preview 2, of Visual Studio 2019 for Mac version 8.4. This preview comes with several exciting new features that we would love for you to try out. To get the preview, download Visual Studio 2019 for Mac and switch to the Preview channel.

Updates in this preview

The focus of this preview is on accessibility improvements, .NET Core, and ASP.NET Core. Let’s dive into the details of the updates.

Accessibility Enhancements

Ensuring Visual Studio for Mac can be used by all users is important to us and we realize the need to support various assistive technologies to make this happen. Visual Studio for Mac previously had some built-in accessibility features compatible with VoiceOver and other assistive technologies. With the release of Preview 2, we’ve increased the surface area of the IDE accessible by assistive services to include several commonly used parts that were previously inaccessible.

Those using assistive technologies will find general improvements over the entire IDE that include focus order, contrast, reduction of keyboard traps, more accurate VoiceOver navigation and reading, and more. We’ve also rewritten the UI for the debugger to make it accessible with VoiceOver.

Improving accessibility of Visual Studio for Mac is a top priority for our team. While we have made rapid progress in this area recently, we are looking for real-world users to help guide the work. Try this preview and reach out to us to let us know which scenarios are working well and which are not. If you would like to directly engage with us on our accessibility work, please email Dominic Nahous, the lead PM for the initiative, at dominicn@microsoft.com. Now let’s move on to the .NET Core specific updates.

.NET Core 3.1 Preview support

In this release, we have added support for the .NET Core 3.1 SDK Preview 2. When you install the preview version of the IDE, that version of the .NET Core SDK will be installed automatically. We have full support for .NET Core 3.1 Preview 2 projects, including creating new projects, editing, building, debugging, and more.

ASP.NET Core Blazor Server Support

In this release we are adding support for developing and publishing ASP.NET Core Blazor Server applications. If you haven’t heard of Blazor, it’s a framework for building interactive client-side web UI with .NET. Here are some of the advantages of using Blazor.

  • Write code in C# instead of JavaScript.
  • Leverage the existing .NET ecosystem of .NET libraries.
  • Share app logic across server and client.
  • Benefit from .NET’s performance, reliability, and security.
  • Stay productive with Visual Studio or Visual Studio Code on Windows, Linux, and macOS.
  • Build on a common set of languages, frameworks, and tools that are stable, feature-rich, and easy to use.

In Visual Studio 2019 for Mac 8.4 Preview 2 you can create new Blazor server projects as well as get the standard support you would expect such as building, running and debugging Blazor projects. As you can see, the Blazor Server App project template is now available in the New Project dialog.

One area of focus for this release was adding support for editing .razor files. These are the files that you’ll be using when creating Blazor applications. If you’ve edited these files in the Windows version of Visual Studio 2019, then you’ll be very comfortable in Visual Studio 2019 for Mac. Both the Windows and Mac versions of the IDE share the same editor for .razor files. You’ll see full colorization and completion support for your .razor files, including completions for Razor components declared in the project.

vsmac blazor editor

You can also publish Blazor applications directly to Azure App Service. And if you don’t have an Azure account to run your Blazor app on Azure, you can always sign up for a free one here that also comes with 12 months of free popular services, $200 in free Azure credits, and over 25 always-free services.

Updates to the editing experience


As mentioned before, the editor in Visual Studio for Mac now supports full colorization, IntelliSense, and completion for .razor files. In addition to adding Blazor support, we’ve been hard at work adding features that have been top requests from our community. The biggest change you will notice is that we brought back preview boxes for any code changes that may result from a code fix or analysis suggestion. In the screenshot below, we see a preview of the changes that will occur if you use the “Make Static” code fix provided by Roslyn.

vsmac csharp editor

Alongside the new preview experience, we are also providing several new code fixes, such as the aforementioned "Make Static," as well as the ability to add null checks to each parameter of a method.

Finally, you may have noticed in the screenshots that the coloring looks more like what you may be used to on Visual Studio for PC. We’ve been working to standardize the Visual Studio theme, and we will be making more progress in this area in the releases ahead, so stay tuned!

Pack support for .NET Core library projects

When creating .NET Core class libraries, you may be interested in distributing your library to a larger audience. To do this you need to create a NuGet package from your class library. In Visual Studio for Mac we made it very easy to create a NuGet package from a .NET Core library project. You right-click your project and then select the Pack menu option as per the example below:

After invoking the Pack menu option for a library project, you will find the NuGet package (.nupkg file) in the output folder. This experience is consistent with that in Visual Studio on PC.

Download and try today

If you haven't already, make sure to download Visual Studio 2019 for Mac and then switch to the preview channel. With this release, we hope you'll be able to easily get started with .NET Core 3.1 as well as Blazor Server applications. We encourage you to leave your comments either below in this post or by submitting issues to the developer community via Report a Problem.

If you're interested in upcoming releases, you'll be happy to know that we have recently updated the Visual Studio 2019 for Mac Roadmap, so please take a look and let us know your thoughts.

Make sure to follow us on Twitter at @VisualStudioMac and reach out to the team. Customer feedback is important to us and we would love to hear your thoughts. Alternatively, you can head over to Visual Studio Developer Community to track your issues, suggest a feature, ask questions, and find answers from others. We use your feedback to continue to improve Visual Studio 2019 for Mac, so thank you again on behalf of our entire team.

The post Visual Studio 2019 for Mac version 8.4 Preview 2, now available appeared first on Visual Studio Blog.

Getting your sites ready for the new Microsoft Edge

This morning, we released Microsoft Edge Beta version 79, which is the final Beta before the new Microsoft Edge is generally available, also known as the “Release Candidate.” On January 15th, we expect to release the “Stable” channel, at which point Microsoft Edge will be generally available to download on Windows and macOS.

The new Microsoft Edge is built on the Chromium engine, providing best in class compatibility with extensions and web sites, with great support for the latest rendering capabilities, modern web applications, and powerful developer tools across all supported platforms.

For Enterprise customers, the new Microsoft Edge also includes Internet Explorer mode, providing a seamless experience across internal sites and LOB apps with legacy dependencies. And for end users, it includes new privacy-enhancing features like tracking prevention that’s on by default and a new InPrivate mode across your entire web experience, so your online searches and browsing are not attributed to you.

You can learn more about how the new Microsoft Edge and Bing work together to be the browser and search engine for business over on the Windows blog. In this post, we’ll share more about how you can add the new Microsoft Edge to your automated browser testing, so your customers have a great experience as they begin to upgrade. We’ll also share resources you can use to file bugs, get support, and see what’s next for the new Microsoft Edge.

Microsoft Edge Insider Channels

Microsoft Edge has multiple channels that you can get started testing today: Canary, Developer, and Beta. Each of these channels has differing levels of support for experimental features, and therefore each has its own level of risk regarding stability.

Logos for Microsoft Edge Canary, Dev, and Beta channels

In general, we recommend testing on the Developer channel as a good balance between Canary (which is essentially untested bits that are built every night) and Beta, which contains six weeks’ worth of changes. The Developer channel may be less stable than Beta but allows developers to experiment and prototype against early bits.

For customers looking for a snapshot of what is coming in the next major version, the Beta channel represents an early preview of the next Stable release. For example, today’s Beta 79 is our “Release Candidate” build for our Stable release on January 15th. To install the browser, simply browse here and select the appropriate channel.

Automated testing

Because the new Microsoft Edge is built on Chromium, it is fully compatible with popular automated testing frameworks like Selenium WebDriver and Puppeteer. With general availability coming in January, we recommend incorporating the new Microsoft Edge into your existing automated tests now – testing the Beta channel will give you six weeks advance notice of any potential issues that may impact your site.

Selenium WebDriver

The most common framework for browser automation is Selenium WebDriver. To configure WebDriver with Microsoft Edge, you'll need to download the version of our WebDriver, MSEdgeDriver, that corresponds to your browser. For example, if you downloaded the Developer channel of Microsoft Edge, click the Settings and More link in the browser, then click "Settings", and then click "About Microsoft Edge" to see your version; it will say something like "79.0.308.0". Once you know that, you can download the matching version of MSEdgeDriver for your operating system.

If you prefer to automate that process, you can check the following registry key for the version of Microsoft Edge that is installed:

HKEY_CURRENT_USER\Software\Microsoft\Edge{ CHANNEL}\BLBeacon (e.g., Computer\HKEY_CURRENT_USER\Software\Microsoft\Edge Dev\BLBeacon)

And then you can download the driver by building a URL to the server that looks like this:

https://msedgedriver.azureedge.net/{VERSION}/edgedriver_{ARCH}.zip (e.g., https://msedgedriver.azureedge.net/79.0.308.1/edgedriver_win32.zip)
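As a minimal sketch of that URL construction (the version and architecture values below are just examples), a shell script could assemble it like this:

```shell
# Build the MSEdgeDriver download URL from a browser version and an
# architecture suffix. Both values are placeholders for illustration.
VERSION="79.0.308.1"
ARCH="win32"
DRIVER_URL="https://msedgedriver.azureedge.net/${VERSION}/edgedriver_${ARCH}.zip"
echo "$DRIVER_URL"
```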

Microsoft Edge should be fully compatible with existing tests written to run in Chrome or other Chromium-based browsers – simply modify the “binary_location” to point to Microsoft Edge, and modify the “executable_path” to point to msedgedriver.exe. MSEdgeDriver.exe currently supports Chrome options, but we do plan on updating the Selenium language bindings in Selenium 4 to account for our new browser. For the time being, the language bindings will default to creating the legacy Microsoft Edge connections, so you will pass in a parameter indicating that these tests should run against the new Microsoft Edge browser:

Here is an example for how you would do that in C#:
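A sketch of that setup, assuming the Microsoft.Edge.SeleniumTools companion package (which supplies an EdgeOptions with a UseChromium flag); the browser and driver paths below are placeholders you would adjust for your machine, and running this requires a live browser plus a matching msedgedriver.exe:

```csharp
using Microsoft.Edge.SeleniumTools; // companion package for Chromium-based Edge

class EdgeChromiumExample
{
    static void Main()
    {
        var options = new EdgeOptions
        {
            // Target the new Chromium-based Microsoft Edge.
            UseChromium = true,
            // Placeholder: point this at your installed Edge channel binary.
            BinaryLocation = @"C:\Program Files (x86)\Microsoft\Edge Dev\Application\msedge.exe"
        };

        // Placeholder: directory containing the matching msedgedriver.exe.
        using var driver = new EdgeDriver(@"C:\tools\msedgedriver", options);
        driver.Navigate().GoToUrl("https://bing.com");
        System.Console.WriteLine(driver.Title);
        driver.Quit();
    }
}
```

Existing Chrome-based tests should need little more than these two path changes to run against Microsoft Edge.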

Puppeteer

Another popular automation framework is Puppeteer, a Node library which provides a high-level API to control Chromium-based Browsers over the DevTools Protocol. By default, Puppeteer will launch a version of Chromium (the core upon which Google Chrome, Microsoft Edge, Brave, Vivaldi, and others are built). However, you can also pass in the path to the browser exe you would like to run instead.

You would write something like this (in JavaScript):
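A sketch of that, with the executablePath below being a placeholder for a typical Windows install of the Dev channel (running it requires puppeteer and a local Edge install):

```javascript
// Launch the new Microsoft Edge with Puppeteer by pointing
// executablePath at the Edge binary instead of bundled Chromium.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({
    executablePath:
      'C:\\Program Files (x86)\\Microsoft\\Edge Dev\\Application\\msedge.exe',
    headless: false,
  });
  const page = await browser.newPage();
  await page.goto('https://bing.com');
  await page.screenshot({ path: 'bing.png' });
  await browser.close();
})();
```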

Automating Internet Explorer mode

In addition to running tests written for Chrome on Microsoft Edge, we've also made it easy to migrate tests written for Internet Explorer 11. The new Microsoft Edge includes "Internet Explorer mode," which allows a tab to render content using IE11 in certain Enterprise contexts (e.g., for Intranet sites or sites specified by your Enterprise Mode Site List).

The new Microsoft Edge allows you to run IE11 validation for legacy sites in addition to your modern experiences. To run your IE11 tests in Microsoft Edge, download the IEDriverServer from Selenium. Then you must pass in a capability to put Microsoft Edge into IE Mode and then run your tests.

Because this capability puts the whole browser into IE11 Mode, you cannot simultaneously test content that should render in the modern Chromium engine, but you should be able to run all of your IE11 tests and validate the rendering in Microsoft Edge. Note that this code requires an update to IEDriverServer which should be included in the next release of Selenium.

After you download the new IEDriverServer from SeleniumHQ and follow the directions for the “Required Configuration” as documented here, you can run the following code to launch the new Microsoft Edge in IE11 mode and run some tests:
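One way this could look in C#, assuming the updated IEDriverServer honors capabilities that point it at an Edge binary; treat the capability names ("ie.edgechromium", "ie.edgepath") and the paths below as illustrative placeholders rather than a definitive API, and note this requires IEDriverServer plus a local Edge install:

```csharp
using OpenQA.Selenium.IE;

class EdgeIEModeExample
{
    static void Main()
    {
        var options = new InternetExplorerOptions();
        // Ask IEDriverServer to launch Microsoft Edge and drive it in IE mode.
        // Capability names and paths here are placeholders for illustration.
        options.AddAdditionalCapability("ie.edgechromium", true);
        options.AddAdditionalCapability("ie.edgepath",
            @"C:\Program Files (x86)\Microsoft\Edge Dev\Application\msedge.exe");

        // Placeholder: directory containing IEDriverServer.exe.
        using var driver = new InternetExplorerDriver(@"C:\tools\IEDriverServer", options);
        driver.Navigate().GoToUrl("http://intranet.contoso.com"); // legacy site
        System.Console.WriteLine(driver.Title);
        driver.Quit();
    }
}
```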

Filing bugs and sharing feedback

As you test your sites in Microsoft Edge, you may encounter issues that appear to be caused by a bug in the browser. For any issue, the quickest way to give feedback is simply to click the “Send feedback” button in the “Help and Feedback” menu (or Alt-Shift-I on Windows). You can describe your issue and share additional details such as screenshots, diagnostic details, or contact information here.

This is also the best place to provide general end-user feedback such as feature suggestions. To date, we’ve received over 230,000 pieces of feedback from users and developers – thank you, and we truly embrace your input!

What’s next for the new Microsoft Edge

Alongside today’s announcements, we’ve updated our Platform Status feature roadmap to reflect the new Microsoft Edge capabilities and an early look at what’s in development for future versions. If you have questions about whether we plan to implement an upcoming HTML/CSS/JS feature, you can search for the corresponding entry here. If you don’t see the feature you’re looking for, simply open an issue on GitHub to get it added.

We’re also continuing to innovate through new standards proposals and by implementing experimental features in Chromium. You can track our focus areas on GitHub in the MSEdgeExplainers repository, where we publish public explainers and “intent to implement” notices as our first step towards shipping new features. We are committed to contributing as a member of the open source community, and have published over 30 explainers to date – and more importantly, we hope to make the web better for everyone.

Get started today by downloading the Microsoft Edge Release Candidate build and adding it to your test matrix, and be sure to share any feedback or issues you might have. We’ll see you in January!

Kyle Pflug, Senior PM Lead, Microsoft Edge
John Jansen, Principal Software Engineering Manager, Microsoft Edge

The post Getting your sites ready for the new Microsoft Edge appeared first on Microsoft Edge Blog.

Developer platform updates at Microsoft Ignite 2019

Earlier this year, we announced some awesome advancements in how developers can better connect with their customers and build people-centric experiences using the Microsoft 365 platform. Today, we’re continuing that story and sharing how you can use these enhancements to build more innovative applications and be more productive. We are focusing on three key areas:

  • Enhancing your applications
  • Optimizing your end-to-end workflow
  • Providing seamless deployment solutions

Enhancing your applications

As the gap between Win32 and the Universal Windows Platform (UWP) shrinks, you can adopt the features and tools that work best for you. A top request from developers is to use the modern UI framework down-level. Today you can start using WinUI 3 Alpha, an early preview of WinUI 3 that allows you to start writing apps that will work down to the Windows 10 April 2018 Update. You can also use the Uno platform to bring your WinUI code anywhere WebAssembly runs – including Windows 7. Learn more about our vision and see the roadmap for WinUI on GitHub.

Figure 1: Conceptual overview of WinUI 3

React Native for Windows v0.60 now matches React Native v0.60 and is available through the latest vnext npm package. Partners like HP, Citrix, epam, and axsy are incorporating React Native for Windows capabilities such as keyboard and transitions, focus handling, and Acrylic to name a few so their apps can shine on Windows devices. You can also access nearly 80% of React Native core APIs on Windows including support for native extensions. Use our Getting Started Guide and other docs to learn more.

Optimizing your end-to-end workflow

Today, the final Beta of Microsoft Edge is available. You can install the browser here, select the appropriate channel, and add it to your test matrix now. That way your customers can have a great experience when they begin to upgrade. The new Microsoft Edge will include Internet Explorer mode for enterprise customers, and new privacy features like tracking prevention for end users. Microsoft Edge will be available to download on Windows and macOS in January 2020.

In addition to the web updates, you'll notice significant improvements to Windows Subsystem for Linux 2 (WSL 2). You can now access sites and services running in your Linux distros via localhost:port; for example, a Node site running in WSL is now reachable from Windows at http://localhost:3000.

Figure 2: http://localhost:3000

WSL 2 distros will also release unused memory back to Windows, so the WSL 2 VM grows and shrinks to fit your memory needs as you run processes inside it. Once a Linux process finishes, the memory it used is freed in the instance, shrinking the memory footprint of the WSL 2 VM in Windows.

Providing seamless deployment solutions

It’s great to see all of our MSIX tool partners provide additional tooling and support for MSIX scenarios and enhance the user experience in the application deployment space. Developers can use the MSIX Packaging Tool to improve signing apps and IT Pros can leverage Device Guard signing to sign their packages with their Azure AD tenant. You can also update your packaging workflows and edit your MSIX packages by using the right-click edit option to directly launch the package editor.

With the MSIX App Attach preview you can use a single package type across physical and virtual desktops. This preview provides on-demand app availability, reduces network traffic, and improves user logon times because the applications are now separated from the user profiles and the OS layer and attached to a VM at user login. This optimizes application and OS image management for virtual environments – no need to bloat the VM image with unused applications or manage app streaming infrastructure. A single VM image can be used across different user/app groups.

We know you have a lot of options when choosing which tools and features to use when updating or creating new apps and websites, and we are committed to supporting you with great tools, features, and frameworks. Please continue to share your feedback with us so we can build the best operating system for all your development tasks.

The post Developer platform updates at Microsoft Ignite 2019 appeared first on Windows Developer Blog.


AI and Cortana in Microsoft 365 put people at the center

This week at Ignite, we're showing you how Microsoft 365 puts people at the center so you can do your best work. Artificial intelligence (AI) in Microsoft 365 is leading this approach, with intelligent, natural, and personalized productivity experiences that help you amplify skills, transform collaboration, and find information. AI in Microsoft 365 is…

The post AI and Cortana in Microsoft 365 put people at the center appeared first on Microsoft 365 Blog.

Announcing Visual Studio Online Public Preview

TL;DR

Available as a public preview beginning at Microsoft's Ignite conference, Visual Studio Online provides managed, on-demand development environments that can be used for long-term projects, to quickly prototype a new feature, or for short-term tasks like reviewing pull requests. You can work with environments from anywhere using either Visual Studio Code, Visual Studio IDE (in private preview), or the included browser-based editor. 😁

Visual Studio Online - Develop anywhere

Empowering Modern Development

Software developers, and the software development process, live on the bleeding edge of technological trends. We talk to developers every day, and we’ve heard that expectations for innovation continue to increase across all industries and sectors. We’ve also noted resounding feedback that the confluence of current trends demands a new breed of development tools and capabilities.

These trends include:

  • More and more teams are distributed remotely or leverage freelancers, which magnifies the pain of onboarding new team members without the benefit of a local IT presence.
  • Open source and inner source are making collaboration more important than ever. As a result, developers are working across boundaries in many codebases, often at the same time.
  • Increasing computational and data workloads (e.g. Machine Learning, Artificial Intelligence, Big Data), powered by cloud computing, are naturally shifting development activities beyond the “standard issue development laptop”.
  • The explosion of cloud native development and microservices have enabled developers to use multiple languages and stacks in a single system to take advantage of each technology’s particular strengths.
  • Developers facing expectations for decreased time-to-market are seeking techniques and technologies to help them collaborate more quickly and increase productivity.

As a result of your feedback, these trends, and what we have learned with Visual Studio Code Remote Development, we have been working hard on a new service called Visual Studio Online. Visual Studio Online philosophically (and technically) extends Visual Studio Code Remote Development to provide managed development environments that can be created on-demand and accessed from anywhere. These environments can be used for long-term projects, to quickly prototype a new feature, or for short-term tasks like reviewing pull requests. Additionally, since many companies already have existing infrastructure for development, we made sure that Visual Studio Online can take advantage of those as well. You can connect to your environments from anywhere using either Visual Studio Code, Visual Studio IDE (see below), or the included browser-based editor.

We’re excited to get your feedback as we launch Visual Studio Online into public preview. Read on to learn more about the service and the scenarios it enables, or dive right in with one of our quickstarts.

Rapid Onboarding

Development environments are the cornerstone on which Visual Studio Online is based. They’re where all of the compute associated with software development happens: compiling, debugging, restoring, etc. Whatever your project or task, you can spin up a Visual Studio Online environment from your development tool of choice or our web portal, and the service will automatically configure everything you need: the source code, runtime, compiler, debugger, editor, personal dotfile configurations, relevant editor extensions and more.

Environments are fast to create and disposable, allowing new team members to quickly onboard to a project and letting you experiment with a new stack, language, or codebase without worrying about affecting your local configuration. And since environments share definitions, they are created in a repeatable manner, all but eliminating the configuration discrepancies between team members that often lead to "works on my machine" bugs.

Additionally, environments are completely configurable so they can be precisely tuned as required by your project. Start simple by specifying a few extensions that you want installed, or take full control over the environment by defining your own Dockerfile.

Cloud Powered

Visual Studio Online’s development environments are Azure-hosted and come with all the benefits of the cloud:

  • They scale to meet your needs:
    • Create as many as you want (up to subscription limits) for all your various projects and tasks and throw them away when you’re done.
    • Need a little extra horsepower? Create a premium environment to get all the CPU and RAM you’d need to tackle even the most demanding projects.
  • They have predictable pricing and you only pay for what you use – down to the second. If you create an environment and delete it after 6 minutes and 45 seconds, you’ll only pay for 6 minutes and 45 seconds. Environments also auto-suspend to eliminate accidental runoff costs.
  • Moving your development workload to the cloud boosts your overall computing power and frees your personal machine to edit media assets, email, chat, stream music, and more.

Already have investments in on-premises development environments, or not quite ready to move a workload to the cloud? Visual Studio Online also allows you to register and connect your own self-hosted environments, so you can use that already-perfectly-tuned environment and experience some of the benefits of Visual Studio Online, for free!

Your Favorite Tools

Visual Studio Online supports three editors: Visual Studio Code, our no-install browser-based editor, and Visual Studio IDE (see below). This allows you to use the tool you’re most comfortable with, in any language or framework.

By installing the Visual Studio Online extension you can use Visual Studio Code, the streamlined code editor with support for operations like debugging, task running, and version control. It aims to provide just the tools a developer needs for a quick code-build-debug cycle. It’s free, built on open source and now enhanced with cloud powered development environments.

Visual Studio Online's browser-based editor adds the ability to connect and code from literally anywhere, and it's fully powered by Visual Studio Code under the hood. Gone are the days of lugging around heavy dev machines on the road or to a coffee shop. Instead, travel light knowing you've got the full computing power of Azure, just a new browser tab away.

We’re also proud to announce that Visual Studio IDE’s support for Visual Studio Online is in private preview at Ignite. Developers will now have the option to use a full-fledged IDE with the entire toolset, from initial design to final deployment, enhanced with the benefits of Visual Studio Online. Along with this private preview, we’re also introducing the capability to create Windows based Visual Studio Online environments, expanding the set of workloads the service will support. Sign up now to be added to the wait list.

Along the way we’ve also learned that developers not only want to use the right tool for the job, but they are also highly opinionated about their development environment, and commonly spend countless hours personalizing their editor and terminal. To address this, Visual Studio Online’s flexible personalization features make any environment you connect to feel familiar, as if you’ve never left home, whichever editor you decide to use.

Even better, you can freely extend these capabilities since Visual Studio Online has support for the rich ecosystem of extensions in the Visual Studio Marketplace.

Effortless Remote Debugging

Once connected to your Visual Studio Online environment, simply run your web app or API as you usually would, and we automatically forward it, so that it’s accessible to you – and only you. It behaves just like your traditional, local dev workflow.

In addition, we'll soon be introducing support for app casting, which will allow you to remotely interact with and share a running GUI application.

Built-in Collaboration

On top of all of this, Visual Studio Online's environments come with built-in collaboration tools like IntelliCode and Live Share. IntelliCode helps enhance individual productivity by instilling AI-assisted intelligence into the editor. It does this by making things like auto-completion smarter with "implicit collaboration" based on an understanding of how APIs are used across thousands of open-source GitHub repositories. Live Share directly facilitates real-time collaboration by enabling developers to edit and debug together, even if they aren't all Visual Studio Online users, or prefer a different editor.

And More to Come!

Visual Studio Online is in public preview at Ignite. That means that now is a great time to try it out and share your feedback. We’re eagerly looking forward to working with the community to understand the best ways to make Visual Studio Online even better.

I want to thank all the users who have submitted feedback already – you’re the ones who have made the service as great as it is today – and I can’t wait to hear from so many more of you.

Next Steps

If you’d like to learn more, head over to our product page or “What is Visual Studio Online?” documentation.

To try the service, follow along with our Visual Studio Code or browser-based experience quickstarts.

As mentioned above, if you’re interested in Visual Studio IDE support and Windows based environments, sign up for our private preview and we’ll do our best to grant you access as soon as possible.

Finally, feel free to report any feedback you have in our issue tracker on GitHub.

We can’t wait to hear what you think!

Thanks,
Nik Molnar & the entire Visual Studio Online team 👋

The post Announcing Visual Studio Online Public Preview appeared first on Visual Studio Blog.

ASP.NET Core updates in .NET Core 3.1 Preview 2

.NET Core 3.1 Preview 2 is now available. This release is primarily focused on bug fixes, but it contains a few new features as well.

Here’s what’s new in this release for ASP.NET Core:

  • New component tag helper
  • Prevent default actions for events in Blazor apps
  • Stop event propagation in Blazor apps
  • Validation of nested models in Blazor forms
  • Detailed errors during Blazor app development

See the release notes for additional details and known issues.

Get started

To get started with ASP.NET Core in .NET Core 3.1 Preview 2 install the .NET Core 3.1 Preview 2 SDK.

If you’re on Windows using Visual Studio, for the best experience we recommend installing the latest preview of Visual Studio 2019 16.4. Installing Visual Studio 2019 16.4 will also install .NET Core 3.1 Preview 2, so you don’t need to separately install it. For Blazor development with .NET Core 3.1, Visual Studio 2019 16.4 is required.

Alongside this .NET Core 3.1 Preview 2 release, we've also released a Blazor WebAssembly update. To install the latest Blazor WebAssembly template, also run the following command:

dotnet new -i Microsoft.AspNetCore.Blazor.Templates::3.1.0-preview2.19528.8

Upgrade an existing project

To upgrade an existing ASP.NET Core 3.1 Preview 1 project to 3.1 Preview 2:

  • Update all Microsoft.AspNetCore.* package references to 3.1.0-preview2.19528.8

See also the full list of breaking changes in ASP.NET Core 3.1.

That’s it! You should now be all set to use .NET Core 3.1 Preview 2!

New component tag helper

Using Razor components from views and pages is now more convenient with the new component tag helper.

Previously, rendering a component from a view or page involved using the RenderComponentAsync HTML helper.

@(await Html.RenderComponentAsync<Counter>(RenderMode.ServerPrerendered, new { IncrementAmount = 10 }))

The new component tag helper simplifies the syntax for rendering components from pages and views. Simply specify the type of the component you wish to render as well as the desired render mode. You can also specify component parameters using attributes prefixed with param-.

<component type="typeof(Counter)" render-mode="ServerPrerendered" param-IncrementAmount="10" />

The different render modes allow you to control how the component is rendered:

  • Static: Renders the component into static HTML.
  • Server: Renders a marker for a Blazor Server application. This doesn't include any output from the component. When the user-agent starts, it uses this marker to bootstrap the Blazor app.
  • ServerPrerendered: Renders the component into static HTML and includes a marker for a Blazor Server app. When the user-agent starts, it uses this marker to bootstrap the Blazor app.

Prevent default actions for events in Blazor apps

You can now prevent the default action for events in Blazor apps using the new @oneventname:preventDefault directive attribute. For example, the following component displays a count in a text box that can be changed by pressing the “+” or “-” keys:

<p>Press "+" or "-" to change the count.</p>
<input value="@count" @onkeypress="@KeyHandler" @onkeypress:preventDefault />

@code {
    int count = 0;

    void KeyHandler(KeyboardEventArgs ev)
    {
        if (ev.Key == "+")
        {
            count++;
        }
        else if (ev.Key == "-")
        {
            count--;
        }
    }
}

The @onkeypress:preventDefault directive attribute prevents the default action of showing the text typed by the user in the text box. Specifying this attribute without a value is equivalent to @onkeypress:preventDefault="true". The value of the attribute can also be an expression: @onkeypress:preventDefault="shouldPreventDefault". You don’t have to define an event handler to prevent the default action; both features can be used independently.

Stop event propagation in Blazor apps

Use the new @oneventname:stopPropagation directive attribute to stop event propagation in Blazor apps.

In the following example, checking the checkbox prevents click events from the child div from propagating to the parent div:

<input @bind="stopPropagation" type="checkbox" />
<div @onclick="OnClickParentDiv">
    Parent div
    <div @onclick="OnClickChildDiv" @onclick:stopPropagation="stopPropagation">
        Child div
    </div>
</div>

@code {
    bool stopPropagation;

    void OnClickParentDiv() => Console.WriteLine("Parent div clicked.");
    void OnClickChildDiv() => Console.WriteLine("Child div clicked.");
}

Detailed errors during Blazor app development

When your Blazor app isn’t functioning properly during development, it’s important to get detailed error information so that you can troubleshoot and fix the issues. Blazor apps now display a gold bar at the bottom of the screen when an error occurs.

During development, in Blazor Server apps, the gold bar will direct you to the browser console where you can see the exception that has occurred.

Blazor detailed errors in development

In production, the gold bar notifies the user that something has gone wrong and recommends refreshing the browser.

Blazor detailed errors in production

The UI for this error handling experience is part of the updated Blazor project templates so that it can be easily customized:

_Host.cshtml

<div id="blazor-error-ui">
    <environment include="Staging,Production">
        An error has occurred. This application may no longer respond until reloaded.
    </environment>
    <environment include="Development">
        An unhandled exception has occurred. See browser dev tools for details.
    </environment>
    <a href="" class="reload">Reload</a>
    <a class="dismiss">🗙</a>
</div>

Validation of nested models in Blazor forms

Blazor provides support for validating form input using data annotations with the built-in DataAnnotationsValidator. However, the DataAnnotationsValidator only validates top-level properties of the model bound to the form.

To validate the entire object graph of the bound model, try out the new ObjectGraphDataAnnotationsValidator available in the experimental Microsoft.AspNetCore.Blazor.DataAnnotations.Validation package:

<EditForm Model="@model" OnValidSubmit="@HandleValidSubmit">
    <ObjectGraphDataAnnotationsValidator />
    ...
</EditForm>
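To opt nested or collection-typed properties into validation, the experimental package also provides a [ValidateComplexType] attribute that you apply to the model. A minimal sketch (the Customer and Address types here are illustrative, not part of the package):

```csharp
using System.ComponentModel.DataAnnotations;

public class Customer
{
    [Required]
    public string Name { get; set; }

    // Without this attribute, only Customer's own top-level properties
    // are validated; with it, Address.Street is validated as well.
    [ValidateComplexType]
    public Address ShipAddress { get; set; } = new Address();
}

public class Address
{
    [Required]
    public string Street { get; set; }
}
```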

The Microsoft.AspNetCore.Blazor.DataAnnotations.Validation package is not slated to ship with .NET Core 3.1; it is provided as an experimental package to gather early feedback.

Give feedback

We hope you enjoy the new features in this preview release of ASP.NET Core! Please let us know what you think by filing issues on GitHub.

Thanks for trying out ASP.NET Core!

The post ASP.NET Core updates in .NET Core 3.1 Preview 2 appeared first on ASP.NET Blog.

Use the power of cloud intelligence to simplify and accelerate IT and the move to a modern workplace


Core to modern management and security is delivering users the modern workplace that empowers them to achieve more across all their devices, while providing the required security and protection of the organization's assets. From Microsoft Endpoint Manager to the new Productivity Score to the expansion of the Desktop App Assure program, we're announcing many new capabilities to make you the hero in your organization.

The post Use the power of cloud intelligence to simplify and accelerate IT and the move to a modern workplace appeared first on Microsoft 365 Blog.

Some Thoughts on Website Boundaries

In the coming weeks, we will update the Bing Webmaster Guidelines to make them clearer and more transparent to the SEO community. This major update will be accompanied by blog posts that share more details and context around some specific violations.
In the first article of this series, we are introducing a new penalty to address “inorganic site structure” violations. This penalty will apply to malicious attempts to obfuscate website boundaries, which covers some old attack vectors (such as doorways) and new ones (such as subdomain leasing).

What is a website anyway?

One of the most fascinating aspects of building a search engine is developing the infrastructure that gives us a deep understanding of the structure of the web. We’re talking trillions and trillions of URLs, connected with one another by hyperlinks.
The task is herculean, but fortunately we can use some logical grouping of these URLs to make the problem more manageable – and understandable by us, mere humans! The most important of these groupings is the concept of a “website”.
We all have some intuition of what a website is. For reference, Wikipedia defines a website as “a collection of related network web resources, such as web pages, multimedia content, which are typically identified with a common domain name.”
It is indeed very typical that the boundary of a website is the domain name. For example, everything that lives under the xbox.com domain name is a single website.
Fig. 1 – Everything under the same domain name is part of the same website.

A common alternative is the case of a hosting service where each subdomain is its own website, such as wordpress.com or blogspot.com. And there are some (less common) cases where each subdirectory is its own website, similar to what GeoCities was offering in the late 90s.
Fig. 2 – Each subdomain is its own separate website.
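To make the two conventions concrete, here is a toy Python sketch (not Bing's actual algorithm) that groups URLs into "websites" under either rule; the naive last-two-labels heuristic would of course mishandle public suffixes like .co.uk:

```python
from urllib.parse import urlsplit
from collections import defaultdict

def group_by_site(urls, subdomain_is_site=False):
    """Group URLs into 'websites' using a naive boundary rule.

    By default the whole registrable domain is one site; with
    subdomain_is_site=True, each full hostname is its own site.
    """
    sites = defaultdict(list)
    for url in urls:
        host = urlsplit(url).hostname
        if subdomain_is_site:
            key = host                            # e.g. alice.wordpress.com
        else:
            key = ".".join(host.split(".")[-2:])  # e.g. xbox.com
        sites[key].append(url)
    return dict(sites)

urls = [
    "https://www.xbox.com/games",
    "https://support.xbox.com/help",
    "https://alice.wordpress.com/post1",
    "https://bob.wordpress.com/post2",
]

print(group_by_site(urls))                          # two sites: xbox.com, wordpress.com
print(group_by_site(urls, subdomain_is_site=True))  # four sites, one per hostname
```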

 

Why does it matter?

Some fundamental algorithms used by search engines differentiate between URLs that belong to the same website and URLs that don't. For example, it is well known that most algorithms based on the link graph propagate link value differently depending on whether a link is internal (same site) or external (cross-site).
These algorithms also use site-level signals (among many others) to infer the relevance and quality of content. That’s why pages on a very trustworthy, high-quality website tend to rank more reliably and higher than others, even if such pages are new and didn’t accumulate a lot of page-level signals.

When things go wrong

Stating the obvious, we can’t have people manually review billions of domains in order to assess what is a website. To solve this problem, like many of the other problems we need to solve at the scale of the web, we developed sophisticated algorithms to determine website boundaries.
The algorithm gets it right most of the time. Occasionally it gets it wrong, either conflating two websites into one or viewing a single website as two different ones. And sometimes there’s no obvious answer, even for humans! For example, if your business operates in both the US and the UK, with content hosted on two separate domains (respectively a .com domain and a .co.uk domain), you can be seen as running either one or two websites depending on how independent your US and UK entities are, how much content is shared across the two domains, how much they link to each other, etc.
However, when we reviewed sample cases where the algorithm got it wrong, we noticed that the most common root cause was that the website owner actively tried to misrepresent the website boundary.
It can indeed be very tempting to try to fool the algorithm. If your internal links are viewed as external, you can get a nice rank boost. And if you can propagate some of the site-level signals to pages that don't technically belong to your website, these pages can get an unfair advantage.

Making things right

In order to maintain the quality of our search results while being transparent to the SEO community, we are introducing new penalties to address “inorganic site structure”. In short, creating a website structure that actively misrepresents your website boundaries is going to be considered a violation of the Bing Webmaster Guidelines and will potentially result in a penalty.
Some “inorganic site structure” violations were already covered by other categories, whereas some of them were not. To understand better what is active misrepresentation, let’s look at three examples.

PBNs and other link networks

While not all link networks misrepresent website boundaries, there are many cases where a single website is artificially split across many different domains, all cross-linking to one another, for the obvious purpose of rank boosting. This is particularly true of PBNs (private blog networks).
Fig. 3 – All these domains are effectively the same website.
This kind of behavior is already in violation of our link policy. Going forward, it will be also in violation of our “inorganic site structure” policy and may receive additional penalties.

Doorways and duplicate content

Doorways are pages that are overly optimized for specific search queries, but which only redirect or point users to a different destination. The typical situation is someone spinning up many different sites hosted under different domain names, each targeting its own set of search queries but all redirecting to the same destination or hosting the same content.
Fig. 4 – All these domains are effectively the same website (again).

Again, this kind of behavior is already in violation of our webmaster guidelines. In addition, it is also a clear-cut example of “inorganic site structure”, since we have ultimately only one real website, but the webmaster tried to make it look like several independent websites, each specialized in its own niche.
Note that we will be looking for malicious intent before flagging sites in violation of our “inorganic site structure” policy. We acknowledge that duplicate content is unavoidable (e.g. HTTP vs. HTTPS); however, there are simple ways to declare one website or destination as the source of truth, such as redirecting duplicate pages with an HTTP 301 or adding canonical tags pointing to the destination. Violators, on the other hand, will generally implement none of these, or will instead use sneaky redirects.
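For example, declaring the source of truth via a canonical tag is a one-line addition to the head of each duplicate page (the URL below is a placeholder):

```html
<!-- On every duplicate or variant page, point to the preferred URL -->
<link rel="canonical" href="https://www.example.com/preferred-page/" />
```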

Subdomain or subfolder leasing

Over the past few months, we heard concerns from the SEO community around the growing practice of hosting third-party content or letting a third party operate a designated subdomain or subfolder, generally in exchange for compensation. This practice, which some people call “subdomain (or subfolder) leasing”, tends to blur website boundaries. Most of the domain is a single website except for a single subdomain or subfolder, which is a separate website operated by a third party.
In most cases that we reviewed, the subdomain had very little visibility for direct navigation from the main website. Concretely, there were very few links from the main domain to the subdomain and these links were generally tucked all the way at the bottom of the main domain pages or in other obscure places. Therefore, the intent was clearly to benefit from site-level signals, even though the content on the subdomain had very little to do with the content on the rest of the domain.
Fig. 5 – The domain is mostly a single website, with the exception of one subdomain.

Some people in the SEO community argue that it’s fair game for a website to monetize their reputation by letting a third party buy and operate from a subdomain. However, in this case the practice equates to buying ranking signals, which is not much different from buying links.
Therefore, we decided to consider “subdomain leasing” a violation of our “inorganic site structure” policy when it is clearly used to bring a completely unrelated third-party service into the website boundary, for the sole purpose of leaking site-level signals to that service. In most cases, the penalties issued for that violation would apply only to the leased subdomain, not the root domain.

Your responsibility as domain owner

This article is also an opportunity to remind domain owners that they are ultimately responsible for the content hosted under their domain, regardless of the website boundaries that we identify. This is particularly true when subdomains or subfolders are operated by different entities.
While clear website boundaries will prevent negative signals due to a single bad actor from leaking to other content hosted under the same domain, the overall domain reputation will be affected if a disproportionate number of websites end up in violation of our webmaster guidelines. Taking an extreme case, if you offer free hosting on your subdomains and 95% of your subdomains are flagged as spam, we will expand penalties to the entire domain, even if the root website itself is not spam.
Another unfortunate case is hacked sites. Once a website is compromised, it is typical for hackers to create subdomains or subdirectories containing spam content, sometimes unbeknownst to the legitimate owner. When we detect this case, we generally penalize the entire website until it is clean.

Learning from you

If you believe you have been unfairly penalized, you can contact Bing Webmaster Support and file a reconsideration request. Please document the situation as thoroughly and transparently as possible, listing all the domains involved. However, we cannot guarantee that we will lift the penalty.
Your feedback is valuable to us! Clarifying our existing duplicate content policy and our stance on subdomain leasing were two requests we heard from the SEO community, and we hope this article addressed both. As we are in the middle of a major update of the Bing Webmaster Guidelines, please feel free to reach out to us and share feedback on Twitter or Facebook.

Thank you,
Frederic Dubut and the Bing Webmaster Tools Team