
Visual Studio 2019 for Mac version 8.3 Preview 3, now available


We’re in the endgame of finishing the Visual Studio for Mac 8.3 release, and we need your help testing out Preview 3 today. To use it, install the latest Visual Studio 2019 for Mac release, and then update to the Preview channel. This is one of our most exciting releases to date, with a focus on some of the top feedback themes we’ve heard from our avid users:
• Improving the ASP.NET Core developer workflow (including brand new web editors).
• Enabling the development of libraries targeting multiple .NET frameworks.

Along with this work, we’ve fixed numerous bugs and improved the overall IDE performance. You can read all about the work to date (including Previews 1 & 2) in our release notes.

Optimizing the ASP.NET Core developer workflow  

A major focus of the Visual Studio for Mac v8.3 release is optimizing the ASP.NET Core developer workflow. We’ve heard from hundreds of .NET Core developers and focused our efforts on addressing the community’s feedback. In this preview, we’re introducing new web editors based on the same editor (and code) as Visual Studio on Windows, and support for managing NuGet packages across multiple projects at the solution level. This is in addition to support for file nesting, launchSettings.json, and launch in target browser released in prior Preview releases.

All web editors, now updated

Since the initial release of Visual Studio 2019 for Mac in April, we’ve been working to update all the editors within the IDE. In v8.1, we introduced the new C# editor. v8.2 brought the new XAML editor to Visual Studio for Mac. In v8.3, we’re updating all the web editors! The new web editors are based on the same native UI as the C# and XAML editors and provide all the advanced features recently introduced to Visual Studio for Mac, such as multi-caret editing, RTL support, and native input support. In addition to these high-level editor features, the new web experience is also powered by the same core as Visual Studio on Windows, so you can expect the same language service features that make Visual Studio such a productive IDE. These language services provide vital features such as IntelliSense, code formatting, syntax highlighting, and navigation support.

The new editors support a variety of web files, including HTML, CSHTML, JS, and CSS, as well as embedded support for JS, C#, and CSS within CSHTML files! This means you get all the features appropriate for the file types you are working in, so you will see advanced IntelliSense in JS, CSHTML, and more. We have also improved support for LESS and SASS files. The web experience in Visual Studio for Mac has never been better!

JavaScript editor with code completion suggestions, in Visual Studio for Mac

NuGet solution-level package management 

We’ve also added support for solution-level NuGet package management. As the number of projects in a solution grows, it becomes difficult to keep packages updated across projects. With the improvements we’ve made in this area, it’s now easier to consolidate to a single version of each package across the solution.

NuGet package management dialog, showing package consolidation in Visual Studio for Mac

Multi-Targeting 

When building modern .NET libraries, it’s common for library authors to target a variety of platforms and devices. .NET Standard is the best solution for adding support for multiple platforms, but sometimes it’s necessary to use APIs in .NET frameworks that don’t support it. In that case, the best solution is to use multi-targeting to build for multiple .NET frameworks. Recently, we included support for working on projects that use multi-targeting, and in Preview 3 we’ve continued to improve upon that experience. When editing code in one of these projects, you can use a Target Framework drop-down at the top of the editor window to focus your editing experience on a specific target framework.
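For context, a multi-targeted project is one that lists several frameworks in the plural TargetFrameworks property of its project file. A minimal sketch (the specific frameworks here are illustrative, not a recommendation):

```xml
<!-- Illustrative multi-targeted library project file. Note the plural
     TargetFrameworks property (with an "s"), which replaces the usual
     single TargetFramework and takes a semicolon-separated list. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFrameworks>netstandard2.0;net472</TargetFrameworks>
  </PropertyGroup>
</Project>
```

Each framework in the list produces its own build output, which is what the Target Framework drop-down and the per-framework dependency view operate on.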

Dependencies are also now displayed broken down by target framework: 

Viewing project dependencies separated by Target Framework, in Visual Studio for Mac

Additionally, when running your project you can choose the target framework against which to debug.

Let us know what you think!

Please download and try out the v8.3 Preview 3 release today by updating to the latest release in the Preview channel! We’ll continue our work on improving the other code editors in the IDE, as well as delivering the features planned on our roadmap.

If you run into any issues with the v8.3 Preview release, please use the Help > Report a Problem menu in the IDE to let us know about it. You can also provide suggestions for future improvements by using the Provide a Suggestion menu.

Report a Problem menu

Finally, make sure to follow us on Twitter at @VisualStudioMac to stay up to date on the latest Visual Studio for Mac news and let us know what your experience has been like. We look forward to hearing from you!

The post Visual Studio 2019 for Mac version 8.3 Preview 3, now available appeared first on The Visual Studio Blog.


Announcing Entity Framework Core 3.0 Preview 9 and Entity Framework 6.3 Preview 9


The Preview 9 versions of the EF Core 3.0 package and the EF 6.3 package are now available for download from nuget.org.

These are the last planned previews before we release the final versions later this month. We have almost completely stopped making changes to the product code, but we are still actively monitoring feedback for any important bugs that may be reported. So please install the previews to validate that all the functionality required by your applications is available and works correctly, and report any issues you find to either the EF Core issue tracker or the EF 6 issue tracker on GitHub.

While we may not be able to fix many more issues in EF Core 3.0 at this point, we’ll consider important bugs and regressions for the upcoming 3.1 release.

What’s new in EF Core 3.0 Preview 9

Besides all the other improvements in EF Core 3.0, Preview 9 includes fixes for more than 100 issues resolved since Preview 8. Here are a few highlights:

  • Support for translating queries that project a single result from a collection using window functions (issue #10001).
  • Support for translating queries with constants or parameters in the GROUP BY key (issue #14152).
  • Improvements to our thread concurrency detection logic to reduce false positives (issue #14534).

Consider installing daily builds

In Preview 9 the functionality of the in-memory provider is still very limited, but this is already fixed in our daily builds. In fact, we have already fixed more than 20 issues that aren’t included in Preview 9, and we may still fix a few more before RTM.

Detailed instructions to install daily builds, including the necessary NuGet feeds, can be found in the How to get daily builds of ASP.NET Core article.

Common workarounds for LINQ queries

The LINQ implementation in EF Core 3.0 is designed to work very differently from the one used in previous versions of EF Core. For this reason, you are likely to run into issues with LINQ queries, especially when upgrading existing applications. Here are some workarounds that might help you get things working:

  • Try a daily build (as previously mentioned) to confirm that you aren’t hitting an issue that has already been fixed.
  • Switch to client evaluation explicitly: If your query filters data based on an expression that cannot be translated to SQL, you may need to switch to client evaluation explicitly by inserting a call to AsEnumerable(), AsAsyncEnumerable(), ToList(), or ToListAsync() in the middle of the query. For example, the following query will no longer work in EF Core 3.0 because one of the predicates in the where clause requires client evaluation:
    var specialCustomers = context.Customers
      .Where(c => c.Name.StartsWith(n) && IsSpecialCustomer(c));

    But if you know it is reasonable to process part of the filter on the client, you can rewrite the query as:

    var specialCustomers = context.Customers
      .Where(c => c.Name.StartsWith(n))
      .AsEnumerable() // Start using LINQ to Objects (switch to client evaluation)
      .Where(c => IsSpecialCustomer(c));

    Remember that this is by-design: In EF Core 3.0, LINQ operations that cannot be translated to SQL are no longer automatically evaluated on the client.

  • Use raw SQL queries: If some expression in your LINQ query is not translated correctly (or at all) to SQL, but you know what translation you would want to have generated, you may be able to work around the issue by executing your own SQL statement using the FromSqlRaw() or FromSqlInterpolated() methods.

    Also make sure an issue exists in our issue tracker on GitHub to support the translation of the specific expression.
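    To sketch what that can look like, here is the same hypothetical Customers/IsSpecialCustomer example from above rewritten with a raw SQL starting point (the table and column names are assumptions for illustration):

    ```csharp
    // Sketch only: FromSqlInterpolated turns each interpolation hole into a
    // database parameter, so the LIKE pattern is passed safely rather than
    // concatenated into the SQL text.
    var pattern = n + "%";
    var specialCustomers = context.Customers
        .FromSqlInterpolated($"SELECT * FROM Customers WHERE Name LIKE {pattern}")
        .AsEnumerable()                    // operators below run on the client
        .Where(c => IsSpecialCustomer(c));
    ```

    Note that you can still compose additional LINQ operators on top of the raw SQL; only the part after AsEnumerable() is evaluated on the client.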

Breaking changes

All breaking changes in this release are listed in the Breaking changes in EF Core 3.0 article. We keep the list up to date on every preview, with the most impactful changes near the top of the list, to make it easier for you to react.

Obtaining the Preview 9 packages

EF Core 3.0 is distributed exclusively as NuGet packages. As usual, add or upgrade the runtime to Preview 9 via the NuGet user interface, the Package Manager Console in Visual Studio, or via the dotnet add package command. In all cases, include the option to allow installing pre-release versions. For example, you can execute the following command to install the SQL Server provider:

dotnet add package Microsoft.EntityFrameworkCore.SqlServer --version 3.0.0-*

With .NET Core 3.0, the dotnet ef command-line tool is no longer included in the .NET Core SDK. Before you can execute EF Core migration or scaffolding commands, you’ll have to install it as either a global or local tool. Due to limitations in dotnet tool install, installing preview tools requires specifying at least part of the preview version on the installation command. For example, to install the 3.0 Preview 9 version of dotnet ef as a global tool, execute the following command:

dotnet tool install --global dotnet-ef --version 3.0.0-*

What’s new in EF 6.3 Preview 9

All of the work planned for the EF 6.3 package has been completed. We are now focused on monitoring your feedback and fixing any important bugs that may be reported.

As with EF Core, any bug fixes that happened after we branched for Preview 9 are available in our daily builds.

How to work with EDMX files in .NET Core projects

On the tooling side, we plan to release an updated EF6 designer in an upcoming update of Visual Studio 2019 which will work with projects that target .NET Core (tracked in issue #883).

Until this new version of the designer is available, we recommend that you work with your EDMX files inside projects that target .NET Framework. You can then add the EDMX file and the generated classes for the entities and the DbContext as linked files to a .NET Core 3.0 or .NET Standard 2.1 project in the same solution. For example, the project file for the .NET Core project can include the linked files like this:

<ItemGroup>
  <EntityDeploy Include="..\EdmxDesignHost\Entities.edmx" Link="Model\Entities.edmx" />
  <Compile Include="..\EdmxDesignHost\Entities.Context.cs" Link="Model\Entities.Context.cs" />
  <Compile Include="..\EdmxDesignHost\Thing.cs" Link="Model\Thing.cs" />
  <Compile Include="..\EdmxDesignHost\Person.cs" Link="Model\Person.cs" />
</ItemGroup>

Note that the EDMX file is linked with the EntityDeploy build action. This is a special MSBuild task (now included in the EF 6.3 package) that takes care of adding the EF model into the target assembly as embedded resources (or copying it as files to the output folder, depending on the Metadata Artifact Processing setting in the EDMX). For more details on how to get this set up, see our EDMX .NET Core sample.

You can choose to copy the files instead of linking them, but keep in mind that due to a bug in current builds of Visual Studio, copying the files from the .NET Framework project to the .NET Core project within Solution Explorer may cause hangs, so it is better to copy the files from the command line.

Feedback requested: Should we build a dotnet ef6 tool?

We are also seeking feedback and possible contributions to enable a cross-platform command line experience for migrations commands, similar to dotnet ef but for EF6 (tracked in issue #1053). If you would like to see this happen, or if you would like to contribute to it, please vote or comment on the issue.

Weekly status updates

If you’d like to track our progress more closely, we now publish weekly status updates to GitHub. We also post these status updates to our Twitter account, @efmagicunicorns.

Thank you

Once more, thank you for trying our preview bits, and for all the bug reports and contributions that will help make EF Core 3.0 and EF 6.3 much better releases.

The post Announcing Entity Framework Core 3.0 Preview 9 and Entity Framework 6.3 Preview 9 appeared first on .NET Blog.

Azure Marketplace new offers – Volume 43


We continue to expand the Azure Marketplace ecosystem. For this volume, 94 new offers successfully met the onboarding criteria and went live. See details of the new offers below:

Applications

AZULINK

AZULINK: Get your application fully managed on Azure with a one-stop-shop partner committed to results and centralizing governance and scalability of your Azure services on IaaS and PaaS.

BI for Dynamics 365FO

BI for Dynamics 365FO: Enable your organization to do more with company data. With Hillstar's standard BI connector for Dynamics 365 for Finance and Operations, users can easily slice and dice through reports and drill down to deeper levels to see more detail.

Blender with Flamenco worker on Windows - ATLG

Blender with Flamenco worker on Windows - ATLG: Blender with Flamenco on Azure provides an easy-to-deploy Flamenco manager/worker environment that can be plugged into Blender Cloud. This marketplace image serves the "render vm" role in the worker pool.

Build Agent PRO for Azure DevOps

Build Agent PRO for Azure DevOps: This template offers a Linux-based build agent for Azure DevOps that can build and deploy .NET CORE, Angular, Node.JS, Java, C/C++, and Container projects by default. An emulation for ARM devices is also included.

Cisco Firepower Management Center Virtual (FMCv)

Cisco Firepower Management Center Virtual (FMCv): Control access to your network, control application use, and defend against known attacks. Use AMP and sandboxing technologies to address unknown attacks and track malware infections throughout your network.

Cobra - Commercial Broker Assistant

Cobra - Commercial Broker Assistant: Cobra includes everything brokers need for business, including Office 365 integration for easy communication with clients and intuitive storage for client data, contracts, and damages.

Data One

Data One: Data One can host, design, build, and manage existing and new reports for organizations that don’t have the capacity to manage their BI demands. The Data One platform simplifies reporting through Northern Data's dynamic BI portal.

DataRoad Reflect

DataRoad Reflect: DataRoad Reflect is a rapid data movement solution that lets you focus on delivering advanced analytics, machine learning, and artificial intelligence instead of spending hours programming data migrations.

desknets NEO

desknets NEO: desknets NEO reduces the burden of operation management with extensive administrative functions, such as user and organization registration information management and flexible access rights management. This application is available only in Japanese.

Docker Community Edition with Ubuntu 18.04 Lts

Docker Community Edition with Ubuntu 18.04 Lts: Docker Community Edition (CE) is ideal for individual developers and small teams looking to get started with Docker and experimenting with container-based apps.

EVE - cloud-based live captions for your event

EVE – cloud-based live captions for your event: EVE not only helps organizations comply with accessibility standards, it is also an additional medium, capturing every spoken word and sharing a transcript after a speech for further actions, including subtitles and SEO.

FM Converge on Azure

FM Converge on Azure: FM Converge on Azure is a highly responsive, cross-asset front-office/middle-office/operations/risk platform for pre-trade pricing, structuring, book valuation, and managing enterprise risk for a wide variety of financial instruments.

Get Azure Ops Data into Splunk - in 3 minutes

Get Azure Ops Data into Splunk - in 3 minutes: StreamWeaver offers a systematic, automated approach to distributing valuable operations data, including event, metric, topology, and log information, from all domains and clouds to the appropriate applications and teams.

Go timesheets, expense and leave software

Go timesheets, expense and leave software: Go is a scalable, web-based and mobile app for managing timesheets, expenses, and leave. Users can connect from anywhere – in the office or in the field – to submit time, leave, and expenses with attached receipts.

Hyper-Q Express Edition for Teradata to SQL DW

Hyper-Q Express Edition for Teradata to SQL DW: Hyper-Q takes SQL extensions and scripts written for Teradata and makes them interoperable with Azure SQL Data Warehouse while requiring little to no change to the business logic your company relies on.

Imredi Audit

Imredi Audit: The Imredi Audit solution is designed to audit stores, collect and analyze data from retail outlets, and help manage field employees. This application is available only in Russian.

iNAS

iNAS: Unissoft is pleased to provide its iNAS cloud-based record-keeping solution for Azure and Office 365 users. iNAS protects records from inadvertent or unauthorized alteration, deletion, access, and retrieval while monitoring the integrity of records through an audit trail.

Indoorway InSite 4.0

Indoorway InSite 4.0: Indoorway provides accurate data and useful analytics about the movement of assets in industrial sites. Locate in real time any moving resources relevant to key production and intralogistics processes.

IoT Core Services

IoT Core Services: IoT Core Services by conplement AG provides a fast and secure end-to-end solution for device/machine connections in the Internet of Things and digital value-added services. This application is available only in German.

Jenkins with CentOS 7.6

Jenkins with CentOS 7.6: Jenkins is an open source automation server written in Java. Jenkins helps automate the non-human part of the software development process, with continuous integration and facilitating technical aspects of continuous delivery.

Lamp with CentOS 7.6

Lamp with CentOS 7.6: LAMP is an archetypal model of web service stacks, named as an acronym of its original components: Linux operating system, Apache HTTP server, MySQL relational database management system, and PHP programming language.

Lamp with Ubuntu Server 18.04 Lts

Lamp with Ubuntu Server 18.04 Lts: LAMP is an archetypal model of web service stacks, named as an acronym of its original components: Linux operating system, Apache HTTP server, MySQL relational database management system, and PHP programming language.

LANCOM vRouter

LANCOM vRouter: The LANCOM vRouter is a software-based router for operation in virtualized environments. With its comprehensive range of functions and numerous security features based on the operating system LCOS, it offers a leading basis for modern infrastructures.

Mediant CE Session Border Controller (SBC)

Mediant CE Session Border Controller (SBC): AudioCodes' Mediant Session Border Controllers deliver seamless connectivity, enhanced security, and quality assurance for enterprise and service provider VoIP networks.

MinIO Helm Chart

MinIO Helm Chart: MinIO is an object storage server mainly used for storing unstructured data such as photos, videos, and log files.

Mojro Technologies Private Limited

Mojro Technologies Private Limited: Mojro's proprietary algorithms are deployed to perform space and route optimization together at scale and enable your organization to automate the planning and execution of logistics.

Movie Viewer

Movie Viewer: Movie Viewer is a virtual editing tool that enables you to create clips from multiple videos and combine them into playlists. This application is available only in Japanese.

NGINX Plus Developer Edition

NGINX Plus Developer Edition: NGINX Plus brings enterprise-ready features such as application load balancing, monitoring, and advanced management to your Microsoft Azure application stack.

Nginx with Ubuntu Server 18.04 Lts

Nginx with Ubuntu Server 18.04 Lts: NGINX is open source software for web serving, reverse proxying, caching, load balancing, media streaming, and more. NGINX started as a web server designed for maximum performance and stability.

Objectivity Metadata Connect

Objectivity Metadata Connect: Metadata Connect allows you to define information about data from any external source and form connections within it. You can then understand how data interacts as it is changed and perform powerful navigational and pathfinding queries.

On-Net Integration Business Series

On-Net Integration Business Series: On-Net Integration Business Series on Microsoft Azure boosts operational efficiency with a wide range of functions. This application is available only in Japanese.

Opus Suite

Opus Suite: Opus Suite gives you fast, accurate analyses, optimization, simulation, and answers throughout your system's lifecycle, helping you take control over performance and lifecycle cost.

OrangeHRM

OrangeHRM: OrangeHRM is a free, comprehensive human resource management system that captures the essential functionalities required for any enterprise.

ProScheduler WFM

ProScheduler WFM: ProScheduler is an enterprise-class workforce management system offering cutting-edge optimization and real-time features. ProScheduler is quick to implement, easy to learn, and typically offers a return on investment within six months.

PyTorch from NVIDIA

PyTorch from NVIDIA: PyTorch is a GPU-accelerated tensor computation framework with a Python front end. This image bundles NVIDIA's container for PyTorch into the NGC base image for Microsoft Azure.

PyTorch Helm Chart

PyTorch Helm Chart: PyTorch is a deep learning platform that accelerates the transition from research prototyping to production deployment. This Bitnami image includes Torchvision for specific computer vision support.

Remote Desktop Services 2019 RDS Farm

Remote Desktop Services 2019 RDS Farm: Set up a basic remote desktop services (RDS) IaaS farm on Azure for testing or a production environment. Scale from 1 RDS host to 50 RDS hosts and allow users to connect to published desktops and applications from any device or OS.

SecureBox

SecureBox: SecureBox is a secure cloud file sharing platform. Access your data anywhere and back up, view, sync, and share your data on Microsoft Azure.

SFTP Gateway

SFTP Gateway: Built on the base Ubuntu 18.04 image from Canonical, SFTP Gateway is a secure-by-default, pre-configured SFTP server that saves files to Azure Blob Storage. Use SFTP Gateway as a traditional SFTP server or to upload files to Azure storage.

SmartGov for Administration

SmartGov for Administration: Proven in over 30 government departments and SOEs, SmartGov for Administration is a tried and tested solution for the digitization of some of the most problematic processes in the South African public sector back office.

Speech to Text

Speech to Text: Zoom Media offers its highly accurate Speech to Text service in 10 languages (Arabic, Danish, Dutch, English US, Filipino, Finnish, Flemish, Italian, Norwegian, and Swedish). All models can be used in batch or real time and are customizable upon request.

TensorFlow from NVIDIA

TensorFlow from NVIDIA: TensorFlow is an open source software library for numerical computation using data flow graphs. This image bundles NVIDIA's GPU-optimized TensorFlow container along with the base NGC Image.

Theobald Software Xtract IS for Azure

Theobald Software Xtract IS for Azure: With Xtract IS for Azure you can either develop new SSIS packages from scratch or use your existing SSIS packages developed with Xtract IS Ultimate/Enterprise. Develop on-premises and run in the cloud.

Tidal Migrations - Premium Insights for Database

Tidal Migrations - Premium Insights for Database: Tidal Migrations provides your team with a simple, fast, and cost-effective cloud migration management solution. This add-on empowers your team with actionable insights on the databases you plan to migrate to Azure.

Total Access Control

Total Access Control: Total Access Control from PortSys offers a Zero Trust approach to secure access to valuable resources wherever they may reside, locally or in the cloud. This single, scalable solution manages access across the enterprise.

Wanos Wan Optimization (SD-WAN)

Wanos WAN Optimization (SD-WAN): Reduce bandwidth and boost remote network access to Azure resources through TCP acceleration, compression, deduplication, object caching, packet loss recovery, forward error correction, QoS, and related WAN acceleration techniques.

WISE-PaaS/RMM 3.3

WISE-PaaS/RMM 3.3: WISE-PaaS/RMM IoT by Advantech is a reliable, scalable, and extensible IoT device management platform that bridges layers in Advantech IoT platform architecture, including IoT device, system, and cloud layers.

WordPress with Ubuntu Server 16.04 Lts

WordPress with Ubuntu Server 16.04 Lts: WordPress is a free and open source content management system based on PHP and MySQL. Features include a plugin architecture and a template system.

Consulting Services

AI Roadmap - 1 Day Brief

AI Roadmap - 1 Day Brief: This one-day briefing from StrategyWise will illustrate why you should use Azure AI tools with industry-specific case studies showing the value you can expect from digital transformation, prescriptive modeling, and AI applications built on Azure.

AI Roadmap - 3 Week Assessment

AI Roadmap - 3 Week Assessment: StrategyWise's three-week assessment will provide you with a comprehensive blueprint for executing successful AI projects on the Azure stack, helping you to drive organizational change and process improvements.

AI Roadmap - 5 Day Workshop

AI Roadmap - 5 Day Workshop: This five-day workshop from StrategyWise will help you identify prime opportunities in your organization to drive organizational change and process improvements through artificial intelligence powered by Azure.

Analytics Roadmap - 1 Day Brief

Analytics Roadmap - 1 Day Brief: StrategyWise will illustrate why you should democratize analytics with industry-specific case studies showing the value you can expect from digital transformation, prescriptive modeling, and AI applications built on Azure.

Analytics Roadmap - 3 Week Assessment

Analytics Roadmap - 3 Week Assessment: Looking to launch an advanced analytics initiative on Azure? This three-week assessment will provide a blueprint for executing successful analytics projects on Azure, helping you to drive organizational change and process improvements.

Analytics Roadmap - 5 Day Workshop

Analytics Roadmap - 5 Day Workshop: This five-day workshop from StrategyWise will help you identify prime opportunities in your organization to drive organizational change and process improvements through analytics democratization powered by Azure.

App Modernization using App service 10 Weeks Imp

App Modernization using App service 10 Weeks Imp.: Build, deploy, and scale modern web, mobile, and API apps using Azure App Service. This service includes architecture design, engineering, and deploying applications in the Azure environment.

Application Portfolio Assessment 6 Weeks

Application Portfolio Assessment: 6 Weeks: Cloudreach's six-week Application Portfolio Assessment with Cloudamize provides enterprises who want to migrate to Azure with a comprehensive migration strategy and a high-level estimate of run and build costs.

Assessment for Modern DataCenter - 4 weeks

Assessment for Modern Datacenter - 4 weeks: Sonata's four-week assessment service advises customers and recommends a roadmap to build a datacenter in the Azure cloud. This service includes analyzing the feasibility of moving existing datacenter infrastructure to Azure.

Azure Application Modernization Assessment - 2 weeks

Azure Application Modernization Assessment - 2 weeks: Sonata's two-week assessment service advises customers on application modernization options and recommends a roadmap to modernize legacy applications in the Azure cloud.

Azure datacenter Modernization 8 weeks Imp

Azure Datacenter Modernization 8 weeks Imp: Sonata's eight-week implementation will migrate and establish a modern datacenter in the Azure cloud. The service will provision datacenter resources and migrate data, databases, and applications to Azure.

Azure IoT 3-Day Proof of Concept

Azure IoT: 3-Day Proof of Concept: This three-day engagement from Tallan will educate your team on what is possible in Azure IoT Hub and build out your POC utilizing Azure IoT services and Power BI.

Azure IoT 3-Day Workshop

Azure IoT: 3-Day Workshop: Tallan's three-day workshop includes presentations, stakeholder interviews, analysis, demos, and hands-on learning to help you create a technical strategy for your IoT solution.

Azure Managed Services 8 Week Implementation

Azure Managed Services: 8 Week Implementation: Cloudreach Cloud Core delivers service management of your cloud platform through monitoring, configuration, troubleshooting, security services, delivery management, and continual service improvement.

Azure Migration 1-day Assessment

Azure Migration: 1-day Assessment: Atmosera's cloud assessment delivers a clear roadmap with options to evaluate workloads and performance data, prioritize business needs, and understand trade-offs when migrating to Azure.

Azure MSP Powered by CLIP 6-Wk Assessment

Azure MSP Powered by CLIP: 6-Wk Assessment: Brillio Azure Managed Services Provider (MSP) Powered by CLIP offers 360-degree coverage to enterprises throughout their cloud journey – from assessment to managing the cloud environment.

Big Data Roadmap - 1 Day Briefing

Big Data Roadmap - 1 Day Briefing: This StrategyWise briefing will illustrate why you should leverage big data with industry-specific case studies showing the value you can expect from digital transformation, prescriptive modeling, and AI applications in the Azure environment.

Big Data Roadmap - 5 Day Workshop

Big Data Roadmap - 5 Day Workshop: Looking to ramp up on big data powered by Azure? This five-day workshop will help you identify prime opportunities in your organization to drive organizational change and process improvements through big data on Azure.

Business Continuity Disaster recovery 2 Weeks Imp

Business Continuity Disaster Recovery 2 Weeks Imp.: This two-week implementation helps customers set up business continuity planning and disaster recovery on Azure. The service includes setting up BCP/DR environments in Azure and configuring apps and databases.

Connected Factory by APEx

Connected Factory by APEx: Cognizant APEx is an Industry 4.0 solution accelerator that enables the integration of devices, systems, and processes powered by the Azure IoT cloud platform to build a connected factory for optimized and enhanced operations.

Data and Analytics Strategy 1-day Workshop

Data and Analytics Strategy: 1-day Workshop: RevGen Partners' one-day interactive workshop introduces success with Azure for data and analytics, a review of current capabilities, and a high-level strategy and roadmap toward maturity.

Data Estate Modernisation 1 Day Workshop

Data Estate Modernisation: 1 Day Workshop: Northdoor's initial one-day workshop for technical and business leaders will assess your existing data estate and provide a roadmap to modernize your data platform (hybrid or full Azure) and licensing model.

DevOps Assessment 1-Week Assessment

DevOps Assessment: 1-Week Assessment: Create DevOps pipeline best practices for Azure DevOps, walk through current DevOps environments and action items needed to move to Azure DevOps, and create and use test workloads as a POC in this assessment from Tallan.

DevOps Implementation 3-Week Implementation

DevOps Implementation: 3-Week Implementation: Tallan will work with you to get all your applications using the same build automation mechanisms for Azure DevOps and ultimately help you build your DevOps pipeline strategy.

Discovery Free 2 hours Workshop

Discovery Free 2 hours Workshop: In this free workshop, Cloocus will investigate your current IT operation system, gather requirements, introduce methodology and references, and propose a fitting cloud roadmap.

Employee Experience Work Teams Jumpstart

Employee Experience @Work: Teams Jumpstart: The Cognizant Jumpstart for Microsoft Teams helps you successfully deploy and get immediate business value from this powerful platform using Azure bots, functions, and other services.

Free 1/2 Day Cloud Economic Assessment

Free 1/2 Day Cloud Economic Assessment: Blue Silver Shift will deliver a half-day workshop with your leadership team to go through digital transformation and the cloud, understanding your business, and building business goals and outcomes.

Free Azure Cost Optimization 1-day Assessment

Free Azure Cost Optimization: 1-day Assessment: ProArch's one-day assessment will analyze all workloads you are using and provide a recommendation report detailing how you can reduce your cloud cost by 30–60 percent or more by moving to Microsoft Azure.

Health Content Manage & Localize - 4-hr Assessment

Health Content: Manage & Localize - 4-hr Assessment: Lionbridge will review your content creation and localization process, content types, linguistic needs, regulatory requirements, current pain points, and volumes to develop a custom end-to-end solution.

IBM Domino Migration to MS Azure 2-Day Assessment

IBM Domino Migration to MS Azure 2-Day Assessment: The Point Alliance methodology, industry-standard migration tools, unique intellectual property, onsite and remote consultants, and proven track record combine to mitigate risk and ensure a successful Azure migration.

Launch IT Lifecycle Mgmt 10-Week Implementation

Launch IT Lifecycle Mgmt: 10-Week Implementation: Launch is a collection of IT lifecycle management services designed to make IT organizations more efficient through a unique combination of people, processes, tools, and automation.

Legislative Management Consulting Svcs 10-Wk Imp

Legislative Management Consulting Svcs: 10-Wk Imp: This service is a great way to migrate from legacy systems to solutions on Azure. Tallan will work with IT and business users to enable disaster recovery and insightful data visualizations while reducing manual effort.

Machine Learning Discovery Study 4-wk Assessment

Machine Learning Discovery Study: 4-wk Assessment: The Data Analysis Bureau will guide you on your data and analytics journey through its Discovery Study, bringing industry and domain best practice and insight to your business.

Managed Services

Managed Services: Capgemini’s Enterprise Portfolio Modernization initiative is a suite of services that aligns application lifecycle and modernization capabilities with Microsoft Azure to offer an end-to-end approach to digital transformation with enterprise capabilities.

Microsoft Azure Cloud Migration 1-Hour Briefing

Microsoft Azure Cloud Migration: 1-Hour Briefing: Are you considering a cloud migration or just want a better understanding of Microsoft Azure? Utilize Plc will help you understand the capabilities of Azure, including Azure Backup, Azure Site Recovery, and security features.

Modernization Blueprint (Small) 3 Week Assessment

Modernization Blueprint (Small): 3 Week Assessment: The Modernization Blueprint provides expert analysis and Azure-specific recommendations across the modernization journey. Deliverables include an implementation plan, strategic vision, and a comprehensive proposal and playbook.

OneMigrate

OneMigrate: Sogeti can reduce cloud migration efforts by 40 percent with OneMigrate, an automated platform plugged in with Azure Site Recovery for server migration and CloudBoost library for environment provisioning.

People Analytics Data Platform 2-Wk Implementation

People Analytics Data Platform: 2-Wk Implementation: This People Analytics solution from Tallan offers insight into the information you likely already have about your employees so that you can identify trends in attrition, helping you retain your top talent.

Predictive Analytics, ML, AI POC 1 week

Predictive Analytics, ML, AI: POC 1 week+: Quadbase Systems offers this one-week proof of concept to demonstrate use cases for predictive analytics and machine learning on Azure ML. You will learn how to apply techniques to improve your business performance.

SQL Server Migration 3-Day Assessment

SQL Server Migration: 3-Day Assessment: CSW offers this migration assessment to help you move your on-premises SQL Server workloads to Azure SQL Database. You will receive an assessment document, suggested cloud architecture, and migration plan.

SQL Server Migration 4-Week Implementation

SQL Server Migration: 4-Week Implementation: After your migration assessment, CSW can carry out the plan to move your on-premises database to Azure SQL Database. CSW engineers will ensure your SQL workload runs flawlessly in Microsoft’s cloud environment.

Telstra Cloud Sight

Telstra Cloud Sight: Telstra Cloud Sight is an automated orchestration platform that enables you to configure your cloud accounts easily and keep them compliant, secure, and optimized – all aligned to your chosen best practice blueprints and with minimal human intervention.

Telstra Managed Public Cloud

Telstra Managed Public Cloud: Readify will install its cloud management layer atop your cloud infrastructure, enabling its expert team to effectively perform day-to-day management, monitoring, and essential security-related activities.

Website Migration - IaaS 4-Week Implementation

Website Migration - IaaS: 4-Week Implementation: CSW will migrate your website to Microsoft Azure, allowing you to capitalize on reliable cloud hosting services and scalability. This implementation includes moving all assets, SSL certificates, domains, databases, and more.

Website Migration - PaaS 2-Week Implementation

Website Migration - PaaS: 2-Week Implementation: CSW will migrate your website to Microsoft Azure, allowing you to capitalize on reliable cloud hosting services and scalability. This implementation includes moving all assets, SSL certificates, domains, databases, and more.

Website Migration 2-Day Assessment

Website Migration: 2-Day Assessment: This assessment from CSW will help you review your website architecture, platform, infrastructure, performance, security, backup, and recovery and then establish the necessary Microsoft Azure services for a cloud implementation.

Refactoring made easy with IntelliCode!

Have you ever found yourself refactoring your code and making the same or similar changes in multiple locations? Maybe you thought about making a regular expression so you could search and replace, but the effort to do that was too great? Eventually, you probably resigned yourself to the time-intensive, error-prone task of going through the code manually.

What if your developer tools could track your edits and learn about the repeatable changes you were making? What if, after only a couple of examples, they could spot you doing something repetitive and offer to take the remaining actions for you? With Visual Studio 2019 version 16.3 Preview 3, we are happy to announce that refactorings can now be enhanced by IntelliCode. IntelliCode spots repetition quickly and suggests other places in your code where you might want to apply that same change, right in your IDE:

 

Try it now

Refactoring is a preview feature of IntelliCode, so when you get Visual Studio 2019 version 16.3 Preview 3, it will be off by default. Visit the Tools > Options page, IntelliCode General tab, Preview features area, and switch C# refactorings to “Enabled” to turn it on.

Once you change this setting, close any files you may have open, then restart Visual Studio:

How to turn on the refactorings feature in tools-options

How it works

Under the hood, IntelliCode looks at each of your edits as you type. It uses PROSE (PROgram Synthesis by Example) to synthesize generalized edit scripts that can take your code from the “before editing” state to the “after”. When IntelliCode discovers that it can apply one of these scripts elsewhere in your code (which can be based on as few as two examples in your code), it lets you know via the Visual Studio lightbulb in the margin or when hovering over the affected code, and through green “squiggles”. The lightbulb offers actions to apply the refactorings for you. The underlying technology is similar to Excel’s Flash Fill feature and is described in this research paper. More details will be presented at the upcoming OOPSLA 2019 conference.
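
To make the idea concrete, here is a toy sketch of learning a repeated edit from a couple of before/after examples and suggesting it elsewhere. This is our own illustration, not the actual PROSE algorithm (which synthesizes far more general edit scripts); the `infer_rewrite` helper and the sample snippets are purely hypothetical:

```python
def infer_rewrite(examples):
    """Infer a single find -> replace rule that explains every
    (before, after) example pair -- a toy stand-in for PROSE-style
    edit-script synthesis from examples."""
    rules = set()
    for before, after in examples:
        # Strip the longest common prefix and suffix; the differing
        # middle is the edit this pair demonstrates.
        p = 0
        while p < min(len(before), len(after)) and before[p] == after[p]:
            p += 1
        s = 0
        while (s < min(len(before), len(after)) - p
               and before[-1 - s] == after[-1 - s]):
            s += 1
        rules.add((before[p:len(before) - s], after[p:len(after) - s]))
    # Exactly one rule explaining all examples => a generalizable edit.
    return rules.pop() if len(rules) == 1 else None

# Two examples of the same edit on different variables...
rule = infer_rewrite([
    ("if (total == 0)", "if (total is 0)"),
    ("if (count == 0)", "if (count is 0)"),
])
old, new = rule  # ("==", "is")
# ...are enough to suggest the same change at a third location.
suggestion = "if (items == 0)".replace(old, new)
```

Note how the differing variable names cancel into the common prefix, so both examples yield one rule; real IntelliCode does this over syntax trees rather than raw text.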

This isn’t just tracking text changes – IntelliCode is aware of the syntactic structure of your code. This syntactic awareness allows it to detect cases where the variable names in your refactoring examples differ but the essential structure of the change is the same:

Illustration showing how IntelliCode can detect patterns syntactically and offer suggestions

If you don’t like a suggested change you can select the ignore option on the lightbulb, and we won’t bother you about that detected pattern again unless you recreate it.

Let us know what you think!

We would love to hear about your experiences as you try this new feature. Good or bad, they will help us improve. Please raise issues and comments via Visual Studio “report a problem”. We’re interested to hear feedback about the recommendations themselves, the performance of the feature, or any capabilities you might be missing. When sending your feedback, it would be really useful if you could share details of what was detected and what sort of edits you were making; we’ll follow up.

The post Refactoring made easy with IntelliCode! appeared first on The Visual Studio Blog.

Introducing open source Windows 10 PowerToys

Microsoft Windows PowerToys

Yesterday the Windows Team announced the first preview and code release of PowerToys for Windows 10. This first preview includes two utilities: the Windows key Shortcut Guide and FancyZones.

Many years ago there was PowerToys for Windows 95 and frankly, it's overdue that we have them for Windows 10 – and bonus points for being open source!

These tools are also open source and hosted on GitHub! Maybe you have an open source project that's a "PowerToy"? Let me know in the comments. A great example of a PowerToy is something that takes a Windows feature and turns it up to 11!

EarTrumpet is a favorite example of mine of a community "PowerToy." It takes the volume control and the Windows audio subsystem and tailors them for the pro/advanced user. You should definitely try it out!

As for these new Windows 10 Power Toys, here’s what the Windows key shortcut guide looks like:

PowerToys - Shortcut Guide

And here's FancyZones. It's very sophisticated. Be sure to watch the YouTube video to see how to use it.

Fancy Zones

To kick the tires on the first two utilities, download the installer here.

The main PowerToys service runs when Windows starts and a user logs in. When the service is running, a PowerToys icon appears in the system tray. Selecting the icon launches the PowerToys settings UI. The settings UI lets you enable and disable individual utilities and provides settings for each utility. There is also a link to the help doc for each utility. You can right-click the tray icon to quit the PowerToys service.

We'd love to see YOU make a PowerToy and maybe it'll get bundled with the PowerToys installer!

How to create new PowerToys

  • See the instructions on how to install the PowerToys Module project template.
  • Specifications for the PowerToys settings API.

We ask that before you start work on a feature that you would like to contribute, please read our Contributor's Guide. We will be happy to work with you to figure out the best approach, provide guidance and mentorship throughout feature development, and help avoid any wasted or duplicate effort.

Additional utilities in the pipeline are:

If you find bugs or have suggestions, please open an issue in the PowerToys GitHub repo.


Sponsor: Uno Platform is the Open Source platform for building single codebase, native mobile, desktop and web apps using only C# and XAML. Built on top of Xamarin and WebAssembly! Check out the Uno Platform tutorial!



© 2019 Scott Hanselman. All rights reserved.
     

Microsoft C++ Team at CppCon 2019

Microsoft @ CppCon

The Microsoft C++ team will have a booth and many talks covering a wide range of topics at CppCon 2019. Come say hi to our team outside Aurora D and attend our talks to learn what’s new in our tooling, dive into new features in the standard, and hear some exciting announcements!

We’ll also be running a survey on the C++ ecosystem and giving away an Xbox One S to one participant. We’ll put the link here when it’s available, but you’ll also be able to find it by stopping by the booth or coming to any of our talks:

Monday 16th

14:00 – 15:00:

Hello World From Scratch by Sy Brand and Peter Bindels

15:15 – 16:15:

Programming with C++ Modules: Guide for the Working Programmer by Gabriel Dos Reis

16:45 – 17:45:

Latest & Greatest in Visual Studio 2019 for C++ Developers by Sy Brand and Marian Luparu

Tuesday 17th

15:15 – 15:45:

What’s New in Visual Studio Code for C++ Development – Remote Development, IntelliSense, Build/Debug, vcpkg, and More! by Tara Raj

15:50 – 16:10:

(Ab)using Compiler Tools by Réka Kovács

C++ Standard Library “Little Things” by Billy O’Neal

Upgrade from “permissive C++” to “modern C++” with Visual Studio 2019 by Nick Uhlenhuth

Wednesday 18th

09:00 – 09:30:

How to Herd 1,000 Libraries by Robert Schumacher

14:00 – 15:00

C++ Sanitizers and Fuzzing for the Windows Platform Using New Compilers, Visual Studio, and Azure by Jim Radigan

Lifetime analysis for everyone by Gábor Horváth and Matthias Gehre

16:45 – 17:45:

Killing Uninitialized Memory: Protecting the OS Without Destroying Performance by Joe Bialek and Shayne Hiet-Block

Thursday 19th

15:15 – 15:45:

Don’t Package Your Libraries, Write Packagable Libraries! (Part 2) by Robert Schumacher

16:45 – 17:45:

Floating-Point charconv: Making Your Code 10x Faster With C++17’s Final Boss by Stephan T. Lavavej

16:15 – 18:00:

De-fragmenting C++: Making Exceptions and RTTI More Affordable and Usable (“Simplifying C++” #6 of N) by Herb Sutter

The post Microsoft C++ Team at CppCon 2019 appeared first on C++ Team Blog.


Top Stories from the Microsoft DevOps Community – 2019.09.06

I am always grateful for the opportunity to publish this newsletter, as the community continues to surprise me with amazing stories of applying automation to improve both software and human lives. An alternative meaning for CI is, perhaps, continuous inspiration. For a weekly dose of CI, please check out this week’s stories, especially the last one!

Use Stryker for .NET code in Azure DevOps
Today, I was introduced to mutation testing with Stryker. Stryker creates deliberate “mutations” in your code to determine if your tests are effective enough to “kill” the “mutants.” Mutation testing allows you to evaluate and increase the efficiency of your tests! Thank you, Rob Bos, for writing this post on using mutation testing with Stryker on a .NET Core application in Azure Pipelines.
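
If mutation testing is new to you, the core loop is small enough to sketch. This toy Python version is our own illustration (Stryker.NET itself mutates the compiled syntax tree and runs your real test project): swap one operator at a time and check whether the test suite notices.

```python
def mutants(src):
    """Yield naive mutants: each swaps the first occurrence of one
    operator -- a tiny stand-in for Stryker's syntax-tree mutations."""
    for old, new in [(">=", ">"), ("<=", "<"), ("*", "/"), ("+", "-")]:
        if old in src:
            yield src.replace(old, new, 1)

def run_suite(discount):
    # A deliberately weak suite: it never exercises the boundary at 100.
    assert discount(200) == 180
    assert discount(50) == 50

src = "def discount(total):\n    return total * 0.9 if total >= 100 else total"

survivors = []
for mutant in mutants(src):
    env = {}
    exec(mutant, env)             # compile the mutated function
    try:
        run_suite(env["discount"])
        survivors.append(mutant)  # suite passed: mutant survived (bad)
    except AssertionError:
        pass                      # suite failed: mutant killed (good)
# The ">= -> >" mutant survives, revealing the untested boundary case.
```

A surviving mutant is exactly the signal Stryker reports: a code change your tests cannot distinguish from the original.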

Project-wide flaky test detection
To add a small, but great point on the topic of testing, this short post from Matteo Emili shows us how to enable project-wide flaky test detection. Flaky tests are highly problematic, since they can cause you to unnecessarily rewrite healthy code. With this simple opt-in, you can let Azure Pipelines detect if your tests are reporting consistent results after a number of runs with no code changes, and eliminate the flaky ones.
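
The detection idea itself is simple to sketch: re-run a test with no code changes and flag mixed outcomes. Here is a minimal illustration of that idea (our own, hypothetical code; Azure Pipelines applies it across real pipeline runs once the opt-in is enabled):

```python
def is_flaky(test, reruns=10):
    """Re-run a test with no code changes; mixed outcomes mean flaky."""
    outcomes = set()
    for _ in range(reruns):
        try:
            test()
            outcomes.add("passed")
        except AssertionError:
            outcomes.add("failed")
    return len(outcomes) > 1  # both pass and fail observed => flaky

def stable_test():
    assert 2 + 2 == 4

calls = {"n": 0}
def state_dependent_test():
    # Deterministic stand-in for a test that depends on leftover state
    # from earlier runs: it only passes on even invocations.
    calls["n"] += 1
    assert calls["n"] % 2 == 0

print(is_flaky(stable_test))           # False
print(is_flaky(state_dependent_test))  # True
```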

Continuously Integrating Angular with Azure DevOps
This post from Frederik Prijck features a detailed walkthrough of setting up an Angular CLI application Continuous Integration pipeline in Azure YAML Pipelines, including publishing visual test results and integrating a linter.

How to add user enabled feature flags to your Azure DevOps Extensions
Many organizations we work with extend Azure DevOps functionality with custom extensions. But what if you wanted your custom extensions to only be visible to some users and teams, or to even be opt-in only? Luckily, you can enable this functionality using a fairly straightforward configuration. This great post from Tiago Pascoal walks us through how to do just that.

Fighting fires with DevOps
And for those of you who could use a break from reading, here is a truly inspiring recording of a presentation on implementing Continuous Delivery and orchestration on a firetruck fleet from DevOpsDays New Zealand.
Ryan McCarvill is not only working for a great cause, and has a critical need for his fleet to be secure and function without a hitch in an emergency, he also shows us how to overcome lack of budget, organizational support, and resources with ingenuity and willingness to go back to the basics!

If you’ve written an article about Azure DevOps or find some great content about DevOps on Azure, please share it with the #AzureDevOps hashtag on Twitter!

The post Top Stories from the Microsoft DevOps Community – 2019.09.06 appeared first on Azure DevOps Blog.

Microsoft’s connected vehicle platform presence at IAA, the Frankfurt Auto Show

This post was co-authored by the extended Microsoft Connected Vehicle Platform (MCVP) team. 

A connected vehicle solution must enable a fleet of potentially millions of vehicles, distributed around the world, to deliver intuitive experiences including infotainment, entertainment, productivity, driver safety, and driver assistance. In addition to these services in the vehicle, a connected vehicle solution is critical for fleet solutions like ride and car sharing, as well as phone apps that incorporate the context of the user and the journey.

Imagine you are driving to your vacation destination and you start your conference call from home while you are packing. When you transition to the shared vehicle, the route planning takes into account the best route for connectivity and easy driving and adjusts the microphone sensitivity during the call in the back seat. These experiences today are constrained to either the center-stack screen, known as the in-vehicle infotainment device (IVI), or other specific hardware and software that is determined when the car is being built. Instead, these experiences should evolve over the lifetime of ridership. The opportunity is for new, modern experiences in vehicles that span the entire interior and systems of a vehicle, plus experiences outside the vehicle, to create deeper and longer-lasting relationships between car makers and their customers throughout the transportation journey.

To realize this opportunity, car manufacturers and mobility-as-a-service (MaaS) providers need a connected vehicle platform that completes the digital feedback loop. That means seamlessly deploying new functionality composed from multiple independently updatable services that reflect new understanding, at scale, and with dependable and consistent management of data and services from Azure to and from three different edges: the vehicle, the phone, and the many enterprise applications that support the journey.

The Microsoft Connected Vehicle Platform (MCVP) is the digital chassis upon which automotive original equipment manufacturers (OEMs) can deliver value-add services to their customers. These services areas include:

  • In-vehicle experiences
  • Autonomous driving
  • Advanced navigation
  • Customer engagement and insights
  • Telematics and prediction services
  • Connectivity and over the air updates (OTA)

MCVP is a platform composed of about 40 different Azure services and tailored for automotive scenarios. To enable continuous over-the-air (OTA) updates of new functionality, MCVP also includes Azure edge technologies such as Automotive IoT Edge, which runs in the vehicle, and Azure Maps for intelligent location services.

With MCVP, and an ecosystem of partners across the industry, Microsoft offers a consistent platform across all digital services. This includes vehicle provisioning, two-way network connectivity, continuous over-the-air updates of containerized functionality, support for command-and-control, hot, warm, or cold path for telematics, and extension hooks for customer or third-party differentiation. Being built on Azure, MCVP includes the hyperscale, global availability, and regulatory compliance that comes as part of the Azure cloud. OEMs and fleet operators leverage MCVP as a way to “move up the stack” and focus on their customers rather than spend resources on non-differentiating infrastructure.

Automotive OEMs are already taking advantage of MCVP, along with many of our ecosystem partners, including the Volkswagen Group, the Renault-Nissan-Mitsubishi Alliance, and Iconiq.

In this blog post, we are delighted to recap many of the MCVP ecosystem partners that accelerate our common customers’ ability to develop and deploy completed connected vehicle solutions.

An image showing the aspects of the Microsoft Connected Vehicle Platform.

Focus areas and supporting partnerships

Microsoft’s ecosystem of partners include independent software vendors (ISVs), automotive suppliers, and systems integrators (SIs) to complete the overall value proposition of MCVP. We have pursued partnerships in these areas:

In-vehicle experiences

Cheaply available screens, increasingly autonomous vehicles, the emergence of pervasive voice assistants, and users’ increased expectation of the connectedness of their things have all combined to create an opportunity for OEMs to differentiate through the digital experiences they offer to the occupants, both the driver and the passengers, of their vehicles.

LG Electronics’ webOS Auto platform offers an in-vehicle, container-capable OS that brings the third-party application ecosystem created for premium TVs to in-vehicle experiences. webOS Auto supports the container-based runtime environment of MCVP and can be an important part of modern experiences in the vehicle.

Faurecia leverages MCVP to create disruptive, connected, and personalized services inside the Cockpit of the Future to reinvent the on-board experience for all occupants.

Autonomous driving

The continuous development of autonomous driving systems requires input from both test fleets and production vehicles that are integrated by a common connected vehicle platform. This is because the underlying machine learning (ML) models that either drive the car or provide assistance to the driver will be updated over time as they are improved based on feedback across those fleets, and those updates will be deployed over the air in incremental rings of deployment by way of their connection to the cloud.

Teraki creates and deploys containerized functionality to vehicles to efficiently extract and manage selected sensor data such as telemetry, video, and 3D information. Teraki’s product continuously trains and updates the sensor data to extract relevant, condensed information that enables customers’ models to achieve the highest accuracy rates, both in the vehicle (edge) and in Azure (cloud).

TomTom is integrating their navigation intelligence services such as HD Maps and Traffic as containerized services for use in MCVP so that other services in the vehicles, including autonomous driving, can take advantage of the additional location context.

Advanced navigation

TomTom’s navigation application has been integrated with the MCVP in-vehicle compute architecture to enable navigation usage and diagnostics data to be sent from vehicles to the Azure cloud, where the data can be used by automakers to generate data-driven insights, deliver tailored services, and make better-informed design and engineering decisions. The benefits of this integration include the immediate insights created by comparing the intended route with the actual route and road metadata. If you are attending IAA, be sure to check out the demo at the Microsoft booth.

Telenav is a leading provider of connected car and location-based services and is working with Microsoft to integrate its intelligent connected-car solution suite, including infotainment, in-car commerce, and navigation, with MCVP.

Customer engagement and insights

Otonomo securely ingests automotive data from OEMs, fleet operators, and others, then reshapes and enriches the data so application and service providers can use it to develop a host of new and innovative offerings that deliver value to drivers. The data services platform has built-in privacy-by-design solutions for both personal and aggregate use cases. Through the collaboration with Microsoft, car manufacturers adopting the Microsoft Connected Vehicle Platform can easily plug their connected car data into Otonomo’s existing ecosystem to quickly roll out new connected car services to drivers.

Telematics and prediction services

DSA is a leading software and solutions provider for quality assurance, diagnostics, and maintenance of the entire vehicle electrics and electronics in the automotive industry. Together, DSA and Microsoft aim to close the digital feedback loops between automotive production facilities and field cars by providing advanced vehicle lifecycle management based on the Microsoft Connected Vehicle Platform.

WirelessCar is a leading managed service provider within the connected vehicle ecosystem. It empowers car makers to provide mobility services with Microsoft Azure and the Microsoft Connected Vehicle Platform, supporting and accelerating their customers’ high market ambitions in a world of rapidly changing business models.

Connectivity and OTA

Cubic Telecom is a leading connectivity management software provider to the automotive and IoT industries globally. They are one of the first partners to bring seamless connectivity as a core service offering to MCVP for a global market. The deep integration with MCVP allows for a single data lake and an integrated services monitoring path. In addition, Cubic Telecom provides connected car capabilities that let drivers use infotainment apps in real-time, connect their devices to the Wi-Fi hotspot, and top-up on data plans to access high-speed LTE connectivity, optionally on a separate APN.

Excelfore is an innovator in automotive over-the-air (OTA) updating and data aggregation technologies. They provide a full implementation of the eSync bi-directional data pipeline, which has been ported to the Microsoft Azure cloud platform and integrated as the first solution for MCVP OTA updating.

Tata Communications is a leading global digital infrastructure provider. We are working with them to help speed the development of new innovative connected car applications. By combining the IoT connectivity capabilities of Tata Communications MOVE™ with MCVP, the two companies will enable automotive manufacturers to offer consumers worldwide more seamless and secure driving experiences.

Microsoft is incredibly excited to be a part of the connected vehicle space. With the Microsoft Connected Vehicle Platform, our ecosystem partners, and our partnerships with leading automotive players – both vehicle OEMs and automotive technology suppliers – we believe we have a uniquely capable offering enabling at global scale the next wave of innovation in the automotive industry as well as related verticals such as smart cities, smart infrastructure, insurance, transportation, and beyond.

Explore the Microsoft Connected Vehicle Platform today and visit us at IAA.

Satellite connectivity expands reach of Azure ExpressRoute across the globe

Staying connected to access and ingest data in today's highly distributed application environments is paramount for any enterprise. Many businesses need to operate in and across highly unpredictable and challenging conditions. For example, energy, farming, mining, and shipping often need to operate in remote, rural, or other isolated locations with poor network connectivity.

With the cloud now the de facto and primary target for the bulk of application and infrastructure migrations, access from remote and rural locations becomes even more important. The path to realizing the value of the cloud starts with a hybrid environment that accesses resources over dedicated and private connectivity.

Network performance for these hybrid scenarios from rural and remote sites becomes increasingly critical. With globally connected organizations and an explosive number of connected devices and data in the cloud, emerging areas such as autonomous driving, as well as traditional remote locations such as cruise ships, are directly affected by connectivity performance. Other examples requiring highly available, fast, and predictable network service include managing supply chain systems from remote farms or transferring data to optimize equipment maintenance in aerospace.

Today, I want to share the progress we have made to help customers solve these issues: satellite connectivity addresses the challenges of operating in remote locations.

Microsoft cloud services can be accessed with Azure ExpressRoute using satellite connectivity. With commercial satellite constellations becoming widely available, new solution architectures offer improved and affordable performance for accessing Microsoft cloud services.

Infographic of High level architecture of ExpressRoute and satellite integration

Microsoft Azure ExpressRoute, with one of the largest networking ecosystems in the public cloud, now includes satellite connectivity partners, bringing new options and coverage.

SES will provide dedicated, private network connectivity from any vessel, airplane, enterprise, energy or government site in the world to the Microsoft Azure cloud platform via its unique multi-orbit satellite systems. As an ExpressRoute partner, SES will provide global reach and fibre-like high-performance to Azure customers via its complete portfolio of Geostationary Earth Orbit (GEO) satellites, Medium Earth Orbit (MEO) O3b constellation, global gateway network, and core terrestrial network infrastructure around the world.

Intelsat’s customers are the global telecommunications service providers and multinational enterprises that rely on our services to power businesses and communities wherever their needs take them. Now they have a powerful new tool in their solutions toolkit. With the ability to rapidly expand the reach of cloud-based enterprises, accelerate customer adoption of cloud services, and deliver additional resiliency to existing cloud-connected networks, the benefits of cloud services are no longer limited to only a subset of users and geographies. Intelsat is excited to bring our global reach and reliability to this partnership with Microsoft, providing the connectivity that is essential to delivering on the expectations and promises of the cloud.

Viasat, a provider of high-speed, high-quality satellite broadband solutions to businesses and commercial entities around the world, is introducing Direct Cloud Connect service to give customers expanded options for accessing enterprise-grade cloud services. Azure ExpressRoute will be the first cloud service offered to enable customers to optimize their network infrastructure and cloud investments through a secure, dedicated network connection to Azure’s intelligent cloud services.

Microsoft wants to help accelerate these scenarios by optimizing connectivity through Microsoft’s global network, one of the largest and most innovative in the world.

ExpressRoute for satellites directly connects our partners’ ground stations to our global network using a dedicated private link. But what, more specifically, does this mean for our customers?

  • Using satellite connectivity with ExpressRoute provides dedicated and highly available, private access directly to Azure and Azure Government clouds.
  • ExpressRoute provides predictable latency through well-connected ground stations, and, as always, maintains all traffic privately on our network – no traversing of the Internet.
  • Customers and partners can harness Microsoft’s global network to rapidly deliver data to where it’s needed or augment routing to best optimize for their specific need.
  • Satellite and a wide selection of service providers will enable rich solution portfolios for cloud and hybrid networking solutions centered around Azure networking services.
  • With some of the world’s leading broadband satellite providers as partners, customers can select the best solution based on their needs. Each of the partners brings different strengths, for example, choices between Geostationary (GEO), Medium Earth Orbit (MEO) and in the future Low Earth Orbit(LEO) satellites, geographical presence, pricing, technology differentiation, bandwidth, and others.
  • ExpressRoute over satellite creates new channels and reach for satellite broadband providers, through a growing base of enterprises, organizations and public sector customers.

    With this addition to the ExpressRoute partner ecosystem, Azure customers in industries like aviation, oil and gas, government, peacekeeping, and remote manufacturing can deploy new use cases and projects that increase the value of their cloud investments and strategy.

    As always, we are very interested in your feedback and suggestions as we continue to enhance our networking services, so I encourage you to share your experiences and suggestions with us.

    You can follow these links to learn more about our partners Intelsat, SES, and Viasat, and learn more about Azure ExpressRoute from our website and our detailed documentation.

    Microsoft Azure available from new cloud regions in Germany


    Frankfurt Germany city skyline.

    Deutsche Bank, Deutsche Telekom, SAP, and others trust Microsoft for their digital transformations

    Today, we’re announcing the availability of Azure in our new cloud regions in Germany. These new regions and our ongoing global expansion are in response to customer demand as more industry leaders choose Microsoft’s cloud services to further their digital transformations. As we enter new markets, we work to address scenarios where data residency is of critical importance, especially for highly regulated industries seeking the compliance standards and extensive security offered by Azure.

    Additionally, Office 365—the world’s leading cloud-based productivity solution—and Dynamics 365 and Power Platform, the next generation of intelligent business applications and tools, will be offered from these new cloud regions to advance even more customers on their cloud journeys.

    Trusted Microsoft cloud services

    Microsoft cloud services delivered from a given geography, such as our new regions in Germany, offer scalable, highly available, and resilient cloud services while helping enterprises and organizations meet their data residency, security, and compliance needs. We have deep expertise protecting data and empowering customers around the globe to meet extensive security and privacy requirements by offering the broadest set of compliance certifications and attestations in the industry. We also have a history of collaborating with customers to navigate evolving business needs, including delivering innovative strategies to help customers accelerate their European Union General Data Protection Regulation (GDPR) compliance.

    Addressing the evolving needs of German customers

    In Germany, companies across industries are adopting cloud technology amidst a changing regulatory framework that includes GDPR and a need for in-country data residency. Cloud services are becoming a key driver of product development, business model creation, and international stage competition. Responding to these changes, we’ve evolved our cloud strategy to better enable the digital transformation of our German customers.

Azure is now available from our new cloud datacenter regions in Germany to provide customers and partners with greater flexibility, the latest intelligent cloud services, full connectivity to our global cloud network, and data residency within Germany. The new regions offer German-specific compliance, including Cloud Computing Compliance Controls Catalogue (C5) attestation, and will remove barriers so in-country companies can benefit from the latest solutions such as containers, IoT, and AI. These customers include:

    • Deutsche Bank, Germany’s leading bank, is leveraging our cloud services to accelerate the innovation of financial products and services while maintaining high-quality service and data security. With our collaboration, Deutsche Bank has developed a data platform that meets both international and local regulatory requirements while offering customers secure and cost-efficient services.
    • Deutsche Telekom, one of the world's leading integrated telecommunications companies, will play an integral role in onboarding customers to our new cloud regions in Germany.
    • SAP, the market leader in enterprise application software, will combine Microsoft Azure and SAP HANA Enterprise Cloud to provide solutions directly from Germany—for the "Intelligent Enterprise in the Intelligent Cloud."
    • Arvato Systems, a global IT specialist and multi-cloud service provider, is now able to offer their customers fully integrated Azure services with data retention in Germany, empowering the digital transformation of German medium-sized companies.

    These investments help us deliver on our continued commitment to serve our customers, reach new ones, and elevate their businesses through the transformative capabilities of the Microsoft Azure cloud platform.

    Please contact your Microsoft representative to learn more about opportunities in Germany or follow this link to learn about Microsoft Azure.

    Azure HPC Cache: Reducing latency between Azure and on-premises storage


    Today we’re previewing the Azure HPC Cache service, a new Azure offering that empowers organizations to more easily run large, complex high-performance computing (HPC) workloads in Azure. Azure HPC Cache reduces latency for applications where data may be tethered to existing data center infrastructure because of dataset sizes and operational scale.

Scale your HPC pipeline using data stored on-premises or in Azure. Azure HPC Cache delivers the high-performance data access you need to run your most demanding, file-based HPC workloads in Azure, without moving petabytes of data, writing new code, or modifying existing applications.

    For users familiar with the Avere vFXT for Azure application available through the Microsoft Azure Marketplace, Azure HPC Cache offers similar functionality in a more seamless experience—meaning even easier data access and simpler management via the Azure Portal and API tools. The service can be driven with Azure APIs and is proactively monitored on the back end by the Azure HPC Cache support team and maintained by Azure service engineers. What is the net benefit? The Azure HPC Cache service delivers all the performance benefits of the Avere vFXT caching technology at an even lower total cost of ownership.

Azure HPC Cache works by automatically caching active data in Azure that is located both on-premises and in Azure, effectively hiding the latency of on-premises network-attached storage (NAS), Azure-based NAS environments using Azure NetApp Files, or Azure Blob Storage. The cache delivers high-performance, seamless network file system (NFSv3) access to files in Portable Operating System Interface (POSIX) compliant directory structures. The cache can also aggregate multiple data sources into a single namespace, presenting one directory structure to clients. Azure compute clients can then access data as though it all originated on a single NAS filer.
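The aggregated-namespace idea can be pictured with a toy sketch (this is purely illustrative Python, not the service's actual API; the export paths and backend URIs are invented): several backends are mapped under one virtual directory tree, so clients resolve every path through a single structure.

```python
# Toy illustration of an aggregated namespace: each client-visible prefix is
# backed by a different data source (on-premises NAS, Azure NetApp Files, or
# Azure Blob Storage), but clients see one directory tree.

exports = {
    "/genomics": "nfs://onprem-nas/vol1/genomics",  # on-premises NAS
    "/renders": "nfs://anf-account/renders",        # Azure NetApp Files
    "/archive": "blob://mystorageacct/archive",     # Azure Blob Storage
}

def resolve(path):
    """Map a client-visible path to the backing data source that serves it."""
    for prefix, backend in exports.items():
        if path == prefix or path.startswith(prefix + "/"):
            return backend + path[len(prefix):]
    raise FileNotFoundError(path)

print(resolve("/renders/shot42/frame001.exr"))
# nfs://anf-account/renders/shot42/frame001.exr
```

From the client's point of view there is just one filer; which backend actually holds the bytes is invisible, which is what lets the cache front a mix of on-premises and Azure storage.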

Ideal for cloud-bursting applications or hybrid NAS environments, Azure HPC Cache lets you keep your data on existing datacenter-resident NetApp or Dell EMC Isilon arrays. Whether you need to store data on-premises while you develop your cloud strategy for security and compliance reasons, or because you simply have so much data on-premises that you don’t want to move it, you can still take full advantage of Azure compute services and do it sooner rather than later. Once you are ready or able to shift data to Azure Storage resources, you can still run file-based workloads with ease. Azure HPC Cache provides the performance you need to lift and shift your pipeline.

    Azure HPC Cache provides high-performance file caching for HPC workloads running in Azure.  

    To the cloud in days, not months

Combined with other Azure services such as the Azure HB- and HC-series virtual machines (VMs) for HPC and the Azure CycleCloud HPC workload manager, Azure HPC Cache lets you quickly reproduce your on-premises environment in the cloud and access on-premises data without committing to a large-scale migration. You can also expect to run your HPC workloads in Azure at performance levels similar to your on-premises infrastructure.

    Azure HPC Cache service is easy to initiate and manage from the Azure Portal. Once your network has been set up and your on-premises environment has IP connectivity to Azure, you can typically turn on Azure HPC Cache service in about ten minutes. Imagine being able to do HPC jobs in days rather than waiting for months while your IT team fine-tunes data migration strategies and completes all required data moves and synchronization processes.

    From burst to all-in: Your choice, your pace

    The high-performance Azure HPC Cache delivers the scale-out file access required by HPC applications across an array of industries, from finance to government, life sciences, manufacturing, media, and oil and gas. The service is ideally suited for read-heavy workloads running on 1,000 to 50,000 compute cores. Because Azure HPC Cache is a metered service with usage charges included on your Azure bill, you can turn it off—and stop the meter—when you’re done.

    In demanding workloads, Azure HPC Cache provides efficient file access to data stored on-premises or in Azure Blob and can be used with cloud orchestration technologies for management.

Azure HPC Cache helps HPC users access Azure resources more simply and economically. You can deliver exactly the performance needed for computationally intensive workloads, in time to meet demand. Start by using Azure capacity for short-term demand, enabling a hybrid NAS environment, or go all-cloud and make Azure your permanent IT infrastructure. Azure HPC Cache provides the seamless data access you need to leverage cloud resources in a manner and at a pace that suits your unique business needs and use cases.

    Proven technology maintained by Azure experts

    Azure HPC Cache service is the latest innovation in a continuum of high-performance caching solutions built on Avere Systems FXT Edge Filer foundational technology. Who uses this technology? A diverse, global community that includes post-production studio artists in the UK, weather researchers in Poland, animators in Toronto, investment bankers in New York City, bioinformaticists in Cambridge and Switzerland, and many, many more of the world’s most demanding HPC users. Azure HPC Cache combines this most sought-after technology with the technical expertise and deep-bench support of the Microsoft Azure team.

    Can’t wait to try it?

    Ready to get off the sidelines and start running your HPC workloads in Azure? We have a few opportunities for customers to preview Azure HPC Cache. Just complete a short survey, and we’ll review your submission for suitability.

    The Azure HPC Cache team is committed to helping deliver on Microsoft’s “Cloud for all” mission and will work with you to design a cloud that you can use to quickly turn your ideas into solutions. Have questions? Email them to AzureHPCCache@microsoft.com.

    Announcing the new version of Microsoft To Do—we’ve come a long way!

    GC Perf Infrastructure – Part 0


In this blog entry and some future ones I will be showing off functionality that our new GC perf infrastructure provides. Andy and I have been working on it (he did all the work; I merely played the consultant role). We will be open sourcing it soon, and I wanted to give you some examples of using it so you can add these to your repertoire of perf analysis techniques when it’s available.

    The general GC perf analysis flow on a customer scenario usually goes like this –

    1) get a perf trace (ETW trace on Windows and event trace on Linux);

2A) if the customer has no complaints and just wants to see if they can improve things, we first get a general idea of things and see if/how they can be improved, or

2B) if the customer does have specific complaints (e.g., long GC pauses, or too much memory used), we look for things that could cause them.

    Of course, as any experienced perf person would know, perf analysis can vary greatly from one case to the next. You look at some data to get some clues to identify the suspicious areas and focus on those areas and get more clues…and usually make incremental progress before you get to the root cause.

To give some context: for the data we get from a trace, we have a library named TraceEvent that parses it into TraceGC objects. Since GC is per process, each process (that observed at least one GC) gets its own list of these TraceGC objects. The TraceGC type includes information on each GC, such as

    • Basic things which are read directly from some GC event’s fields like Number (index of this GC), Generation (which generation this GC collected), PerHeapHistories (which includes a lot of info such as what condemned reasons this heap incurred, the generation data for each generation for this heap)
    • Processed info like PauseDurationMSec (for ephemeral GCs this is the difference between the timestamp of the SuspendEEStart event and the RestartEEStop event);
• Info that gets lit up when you have additional events, e.g., GCCpuMSec if you have CPU samples collected in the trace.

    So given a basic GC trace there’s already a ton of info you can get. We do some processing in TraceEvent and a perf analysis means looking at info these TraceGC objects give you in really any number of ways.
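As a rough illustration of the processed PauseDurationMSec field described above (a sketch with made-up timestamps; in the real infrastructure these values come from TraceEvent's parsed events):

```python
# Hypothetical sketch: for an ephemeral GC, PauseDurationMSec is the
# difference between the timestamps of the SuspendEEStart event and the
# RestartEEStop event. Timestamps here are invented for illustration.

def pause_duration_msec(suspend_ee_start, restart_ee_stop):
    """Pause = time from SuspendEEStart to RestartEEStop, in milliseconds."""
    return restart_ee_stop - suspend_ee_start

# A GC whose EE suspension started at t=1000.0ms and whose RestartEEStop
# event fired at t=1015.5ms paused the process for 15.5ms.
print(pause_duration_msec(1000.0, 1015.5))  # 15.5
```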

If we have a big trace, the time it takes to parse it into these TraceGC objects can be very long. And if I changed my analysis code, I’d have to restart my analysis process, which means parsing the trace again. This seemed very inefficient. I started searching for options that could persist the TraceGC objects so I could modify my code to consume them without having to reprocess the trace. I found Jupyter Notebook, which lets you edit Python code in individual cells while the results persist in memory and can be used by any other cell, and pythonnet, which lets you interop with a C# library from Python. This meant I could keep the TraceGC objects in memory and edit the code that looks at them in any way I desired, without reprocessing the trace at all. This, along with the nice charting capabilities available from Python, gave me exactly what I needed. So I would have one cell that did just the trace processing, producing the TraceGC objects, and other cells for the various ways to look at the info in those objects, editing those cells whenever I needed.

    This was several years ago and I’ve been using it since. When Andy joined the team and started working on a new GC perf infra I asked him to adapt this as part of our infra. And today I looked at a trace with him and below is what we did.

In this case I just got the trace from a customer to see if we could find anything that can be improved – by something the customer can do, by something we already did in a release the customer hasn’t upgraded to, or by something we are doing or planning to do. To start with, we looked at 3 metrics – individual GC pause times, GC speed (i.e., what the GC promoted / GC pause time), and heap size after each GC – just to get a general idea. We used histogram charts for these:

     

*NGC means NonConcurrent GC; I didn’t want to call it BGC (Blocking GC) because BGC already means Background GC.

If you look at the PauseDurationMSec charts for gen0 GCs (NGC0), gen1 GCs (NGC1) and BGCs (there were no full blocking GCs in this trace), most of them were in the range of a few to 20ms. But there are definitely some longer ones, e.g., some between 75 and 100ms in the NGC0 chart. And right off the bat we see some outliers for BGC – most of them are < 40ms, but then there are some > 100ms! And it’s hard to see on the charts, but there are actually some thin blue lines in the > 100ms range in both the NGC0 and NGC1 charts.

Since we are using Jupyter, we just changed the code in the cell for this to show only GCs with PauseDurationMSec > 50ms and redrew the charts – now it’s very clear there are some ephemeral GCs with > 100ms pauses, and we can see the one long BGC pause is 114.2ms.
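In code, that filtering step looks roughly like this (a pure-Python sketch with plain dicts standing in for TraceGC objects; the field names mirror the ones described earlier, and all values are invented except the 114.2ms BGC pause quoted above):

```python
# Made-up stand-ins for TraceGC objects; in the real workflow these come from
# TraceEvent's parsing of the trace, kept alive in a Jupyter cell.
gcs = [
    {"Number": 0, "Generation": 0, "PauseDurationMSec": 12.4},
    {"Number": 1, "Generation": 1, "PauseDurationMSec": 87.0},
    {"Number": 2, "Generation": 0, "PauseDurationMSec": 156.0},
    {"Number": 3, "Generation": 2, "PauseDurationMSec": 114.2},
]

# Keep only the outliers, mirroring the "> 50ms" cutoff used for the charts.
long_gcs = [gc for gc in gcs if gc["PauseDurationMSec"] > 50]
for gc in long_gcs:
    print(gc["Number"], gc["Generation"], gc["PauseDurationMSec"])
```

Because the parsed objects stay in memory, re-running just this cell with a different cutoff is instant; no trace reprocessing is needed.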

     

And we can see the GC speed (the PromotedMBPerSec charts) is very low for them. In the group of charts with all GCs, we see many with PromotedMBPerSec in the hundreds. But for these long GCs it is really low – < 16 PromotedMBPerSec.

If the long GCs’ PromotedMBPerSec were not so low, it would mean they simply had a lot more memory to promote, which would indicate a GC tuning problem – one very likely reason would be that we are not setting the allocation budgets correctly.

But since that’s not the case, we wanted to see why these GCs’ speed was so low – we spent a long time paused, but the GC was not able to do work at its normal speed.

Let’s concentrate on the gen0 GCs (NGC0) first as a starting point. We know the way PauseDurationMSec is calculated, so it consists of suspending the EE (SuspendDurationMSec) + actual GC work + resuming the EE. Resuming the EE generally takes very little time, so I won’t look at that first. I wanted to see if suspension was too long, so we looked at NGC0’s pause and suspension with our table printing function. Since it’s so easy, we’ll throw in the total promoted MB and the GC speed, and sort by PauseDurationMSec (highest first):

     

    “pause msec” is PauseDurationMSec.

    “suspend msec” is SuspendDurationMSec.

    “promoted mb” is PromotedMB for this GC.

    (I’m only showing the top few for brevity)

Right away we see some long suspension times – the GCs that took 156ms and 101ms spent 95.4ms and 49.4ms in suspension, respectively. So that definitely shows a suspension issue. But the other GCs, like the longest one that took 187ms, spent very little time in suspension.

We do have another field in the TraceGC class called DurationMSec, which is the difference between the timestamps of the GCStart event and the GCStop event. At first glance this should just be PauseDurationMSec – suspending EE – resuming EE. Almost – there’s a bit of work we have to do between SuspendEEStop and GCStart, and between GCStop and RestartEEStart. So if things work as expected, (PauseDurationMSec – DurationMSec) should be almost the same as (suspending EE + resuming EE). We changed the code again to add a DurationMSec column (“duration msec”) and sorted by that column:

     

    The longest GC (187ms) has only 8.73ms DurationMSec! And Suspend only took 0.0544ms. So there’s a huge difference between PauseDurationMSec and DurationMSec, not accounted by the suspension cost.

    We modified the code again to add a few more columns, mainly the “pause to start” column which is the difference between the timestamp of SuspendEEStart and GCStart so it includes the suspension time. We also calculate a “pause %” column which is (“suspend msec” / “pause to start” * 100) and a “suspend %” column which is (“suspend msec” / “pause msec” * 100). Also we changed the “promoted mb/sec” column to use DurationMSec instead of PauseDurationMSec. Now some rows in the table change very drastically. The table is sorted by the “pause %” column.
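The column math can be sketched like so (a pure-Python illustration; the numbers for the 187ms GC come from the text, the other rows are invented, and the percentage definitions here are illustrative rather than the infra's exact columns):

```python
# Illustrative rows standing in for TraceGC-derived values. "pause_to_start"
# is the time from SuspendEEStart to GCStart (suspension + pre-GC work).
rows = [
    {"pause": 187.0, "suspend": 0.0544, "duration": 8.73, "pause_to_start": 178.0},
    {"pause": 156.0, "suspend": 95.4,   "duration": 58.0, "pause_to_start": 97.0},
    {"pause": 101.0, "suspend": 49.4,   "duration": 50.0, "pause_to_start": 50.5},
]

for r in rows:
    # Share of the whole pause spent suspending the EE ("suspend %" style).
    r["suspend_pct"] = r["suspend"] / r["pause"] * 100
    # Share of the pause spent before GCStart even happened.
    r["pause_to_start_pct"] = r["pause_to_start"] / r["pause"] * 100

# Sort the most suspicious GCs (largest pre-GCStart share) to the top.
rows.sort(key=lambda r: r["pause_to_start_pct"], reverse=True)
print(round(rows[0]["pause_to_start_pct"], 1))  # 95.2 for the 187ms GC
```

The sort immediately surfaces the 187ms GC, where almost the entire pause fell between SuspendEEStart and GCStart rather than inside the GC itself.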

     

    The GC that took 187ms spent 178ms from SuspendEEStart to GCStart!!! And of course the GC speed (promoted mb/sec) is now a lot higher.

This is enough evidence to tell us that the GC threads are being severely interfered with, whether by other processes or by other threads in this process. We’d need to collect more events to diagnose further.

     

    The post GC Perf Infrastructure – Part 0 appeared first on .NET Blog.


    Say hello to the new Visual Studio terminal!


     

    Building on the momentum from the recently announced Developer PowerShell, we are excited to share the first preview of the new Visual Studio terminal. This new preview experience is part of Visual Studio version 16.3 Preview 3.

     

    Rather than build everything from scratch, the Visual Studio terminal shares most of its core with the Windows Terminal. For you, that translates into a more robust terminal experience, and faster adoption of new functionality.

     

    Enabling the new Visual Studio terminal

    To try the terminal preview, you’ll first need to enable it by visiting the Preview Features page. Go to Tools > Options > Preview Features, enable the Experimental VS Terminal option and restart Visual Studio.

Once enabled, you can invoke it via the View > Terminal Window menu entry or via search.

    Creating Terminal profiles

    Launching the terminal automatically opens an integrated PowerShell instance. However, you can customize the startup experience by using shell profiles.

    With shell profiles, you can target different types of shells, invoke them using unique arguments, or even set a default shell that better fits your needs.

    In future updates, we plan to optimize the experience by pre-populating the terminal with a few basic profiles. In the meantime, you can manually add additional profiles on the terminal’s Options page.

     

    As an example, here’s how you can set profiles for some popular options:

    Developer Command Prompt

    Shell location:
C:\Windows\System32\cmd.exe
    Arguments:
/k "C:\Program Files (x86)\Microsoft Visual Studio\2019\IntPreview\Common7\Tools\VsDevCmd.bat"

    Developer PowerShell

    Shell location:
C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe
    Arguments:
-NoExit -Command "& { Import-Module 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Preview_master\Common7\Tools\vsdevshell\Microsoft.VisualStudio.DevShell.dll'; Enter-VsDevShell -InstanceId f86c8b33}"

    Note: You’ll need to update the above argument to match your specific configuration. You can extract the argument information by looking into the Target string for the Developer PowerShell shortcut.

    WSL

    Shell location:
C:\WINDOWS\sysnative\wsl.exe

     

    Try it out and let us know what you think!

While we are excited to share this preview, we want to ensure a solid experience before enabling it in the release version of Visual Studio. As a result, the terminal will initially only be available in preview versions of Visual Studio 2019.

    As next steps, we’ll look to deliver improvements around rendering (the terminal currently needs to be resized to render correctly), accessibility and theming. We’ll also add new productivity boosters such as multiple terminal instances and deeper integration with Visual Studio.

    We’d love to know how it fits your workflow and how we could further improve your terminal experience. Send us your feedback via the Developer Community portal, or via the Help > Send Feedback feature inside Visual Studio.

    The post Say hello to the new Visual Studio terminal! appeared first on The Visual Studio Blog.

    Monitoring on Azure HDInsight part 4: Workload metrics and logs


This is the fourth blog post in a four-part series on monitoring on Azure HDInsight. Monitoring on Azure HDInsight Part 1: An Overview discusses the three main monitoring categories: cluster health and availability, resource utilization and performance, and job status and logs. Part 2 is centered on the first topic, monitoring cluster health and availability. Part 3 discusses monitoring performance and resource utilization. This blog covers the third of those topics, workload metrics and logs, in more depth.


During normal operations when your Azure HDInsight clusters are healthy and performing optimally, you will likely focus your attention on monitoring the workloads running on your clusters and viewing relevant logs to assist with debugging. Azure HDInsight offers two tools that can be used to monitor cluster workloads: Apache Ambari and integration with Azure Monitor logs. Apache Ambari is included with all Azure HDInsight clusters and provides an easy-to-use web user interface that can be used to monitor the cluster and perform configuration changes. Azure Monitor collects metrics and logs from multiple resources, such as HDInsight clusters, into an Azure Monitor Log Analytics workspace. An Azure Monitor Log Analytics workspace presents your metrics and logs as structured, queryable tables that can be used to configure custom alerts. Azure Monitor logs provide an excellent overall experience for monitoring workloads and interacting with logs, especially if you have multiple clusters.

    Azure Monitor logs

    Azure Monitor logs enable data generated by multiple resources such as HDInsight clusters to be collected and aggregated in one place to achieve a unified monitoring experience. As a prerequisite, you will need a Log Analytics workspace to store the collected data. If you have not already created one, you can follow these instructions for creating an Azure Monitor Log Analytics workspace. You can then easily configure an HDInsight cluster to send a host of logs and metrics to Azure Monitor Log Analytics.

    HDInsight monitoring solutions

Azure HDInsight offers pre-made monitoring dashboards in the form of solutions that can be used to monitor the workloads running on your clusters. There are solutions for Apache Spark, Hadoop, Apache Kafka, Live Long and Process (LLAP), Apache HBase, and Apache Storm available in the Azure Marketplace. Please see our documentation to learn how to install a monitoring solution. These solutions are workload-specific, allowing you to monitor metrics like central processing unit (CPU) time, available YARN memory, and logical disk writes across multiple clusters of a given type. Selecting a graph takes you to the query used to generate it, shown in the logs view.

An example of the job graph showing stages 0 through 3 for a Spark job.

     

    The HDInsight Spark monitoring solutions provide a simple pre-made dashboard where you can monitor workload-specific metrics for multiple clusters on a single pane of glass.

    The pre-made dashboard for Kafka we offer as part of HDInsight for monitoring Kafka clusters.

    The HDInsight Kafka monitoring solution enables you to monitor all of your Kafka clusters on a single pane of glass.

    Query using the logs blade

    You can also use the logs view in your Log Analytics workspace to query the metrics and tables directly.

    HDInsight clusters emit several workload-specific tables of logs, such as log_resourcemanager_CL, log_spark_CL, log_kafkaserver_CL, log_jupyter_CL, log_regionserver_CL, and log_hmaster_CL.

    On the metrics side, clusters emit several metrics tables, including metrics_sparkapps_CL, metrics_resourcemanager_queue_root_CL, metrics_kafka_CL, and metrics_hmaster_CL. For more information, please see our documentation, Query Azure Monitor logs to monitor HDInsight clusters.

    The log blade in a Log Analytics workspace used to query metrics and logs tables.

    The Logs blade in a Log Analytics workspace lets you query collected metrics and logs across many clusters.

    Azure Monitor alerts

You can also set up Azure Monitor alerts that will trigger when the value of a metric or the results of a query meet certain conditions. You can condition on a query returning a record with a value that is greater than or less than a certain threshold, or even on the number of results returned by a query. For example, you could create an alert to send an email if a Spark job fails or if Kafka disk usage exceeds 90 percent.

There are several types of actions you can choose to trigger when your alert fires, such as an email, SMS, push notification, voice call, an Azure Function, an Azure Logic App, a webhook, an IT service management (ITSM) action, or an automation runbook. You can set multiple actions for a single alert, and you can find more information about these different types of actions in our documentation, Create and manage action groups in the Azure Portal.

    Finally, you can specify a severity for the alert in addition to the name. The ability to specify severity is a powerful tool that can be used when creating multiple alerts. For example, you could create an alert to raise a Sev 1 warning alert if a single head node becomes unavailable and another alert that raises a Sev 0 critical alert in the unlikely event that both head nodes go down. Alerts can be grouped by severity when viewed later.

    Apache Ambari

    The Apache Ambari dashboard provides links to several different views for monitoring workloads on your cluster.

    ResourceManager user interface

    The ResourceManager user interface provides several views to monitor jobs on a YARN-based cluster. Here, you can see multiple views, including an overview of finished or running apps and their resource usage, a view of scheduled jobs by queue, and a list of job execution history and the status of each. You can click on an individual application ID to view more details about that job.

    The Applications tab in YARN UI, which shows a list of application execution history for a cluster.

    Spark History Server

The Apache Spark History Server shows detailed information for completed Spark jobs, allowing for easy monitoring and debugging. In addition to the traditional tabs across the top (jobs, stages, executors, etc.), you will find additional data, graph, and diagnostic tabs to help with further debugging.

    The pre-made dashboard for Spark we offer as part of HDInsight for monitoring Spark clusters.

    Cluster logs

    YARN log files are available on HDInsight clusters and can be accessed through the ResourceManager logs link in Apache Ambari. For more information about cluster logs, please see our documentation, Manage logs for an HDInsight cluster.

    Next steps

If you haven’t read the other blogs in this series, check out parts one, two, and three.

    About Azure HDInsight

    Azure HDInsight is an easy, cost-effective, enterprise-grade service for open source analytics that enables customers to easily run popular open source frameworks including Apache Hadoop, Spark, Kafka, and others. The service is available in 36 regions and Azure Government and national clouds. Azure HDInsight powers mission-critical applications in a wide variety of sectors and enables a wide range of use cases including extract, transform, and load (ETL), streaming, and interactive querying.

    Building cloud-native applications with Azure and HashiCorp


With each passing year, more and more developers are building cloud-native applications. As developers build more complex applications, they are looking to innovators like Microsoft Azure and HashiCorp to reduce the complexity of building and operating these applications. HashiCorp and Azure have worked together on a myriad of innovations, including tools that connect cloud-native applications to legacy infrastructure and tools that secure and automate the continuous deployment of customer applications and infrastructure. Azure is deeply committed to being the best platform for open source software developers like HashiCorp to deliver their tools to their customers in an easy-to-use, integrated way. Azure innovations like the managed applications platform that powers HashiCorp’s Consul Service on Azure are great examples of this commitment to collaboration and a vibrant open source startup ecosystem. We’re also committed to the development of open standards that help these ecosystems move forward, and we’re thrilled to have been able to collaborate with HashiCorp on both the CNAB (Cloud Native Application Bundle) and SMI (Service Mesh Interface) specifications.

    Last year at HashiConf 2018, I had the opportunity to share how we had started to integrate Terraform and Packer into the Azure platform. I’m incredibly excited to get the opportunity to return this year to share how these integrations are progressing and to share a new collaboration on cloud native networking. With this new work we now have collaborations that help customers connect and operate their applications on Azure using HashiCorp technology.

    Connect — HashiCorp Consul Service on Azure

After containers and Kubernetes, one of the most important innovations in microservices has been the development of the concept of a service mesh. Earlier this year we partnered with HashiCorp and others to announce the release of Service Mesh Interface (SMI), a collaborative, implementation-agnostic API for the configuration and deployment of service mesh technology. We collaborated with HashiCorp to produce an implementation of SMI's traffic access control (TAC) rules using Consul Connect. Today we’re excited that Azure customers can take advantage of HashiCorp Consul Service on Azure, powered by the Azure Managed Applications platform. HashiCorp Consul provides a solution to simplify and secure service networking, and with this new managed offering our joint customers can focus on the value of Consul, confident that the experts at HashiCorp are taking care of managing the service. This reduces complexity for customers and enables them to focus on cloud-native innovation.

    Provision — HashiCorp Terraform on Azure

HashiCorp Terraform is a great tool for doing declarative deployments to Azure. We're seeing great momentum with the adoption of HashiCorp Terraform on Azure: the number of customers has doubled since the beginning of the year, and customers are using Terraform to automate Azure infrastructure deployment and operation in a variety of scenarios.

The momentum is fantastic on the contribution front as well, with nearly 180 unique contributors to the Terraform provider for Azure Resource Manager. Community involvement, combined with our increased three-week release cadence (currently at version 1.32), ensures broader coverage of Azure services by Terraform. Additionally, after customer and community feedback on the need for additional Terraform modules for Azure, we've been working hard to add high-quality modules and have now doubled the number of Azure modules in the Terraform Registry, bringing it to over 120 modules.

    We believe all these additional integrations enable customers to manage infrastructure as code more easily and simplify managing their cloud environments. Learn more about Terraform on Azure.

Microsoft and HashiCorp are working together to provide integrated support for Terraform on Azure. Customers using Terraform on Microsoft's Azure cloud are mutual customers, and both companies are united to provide troubleshooting and support services. This joint entitlement process provides collaborative support across companies and platforms while delivering a seamless customer experience. Customers using the Terraform provider for Azure can file support tickets with Microsoft support, and customers with Terraform on Azure support can file tickets with either Microsoft or HashiCorp.

    Deploy — Collaborating on Cloud Native Application Bundles specification

One of the critical problems solved by containers is the hermetic packaging of a binary into a package that is easy to share and deploy around the world. But a cloud-native application is more than a binary, and this is what led to the co-development, with HashiCorp and others, of the Cloud Native Application Bundle (CNAB) specification. CNABs allow you to package images alongside configuration tools like Terraform and other artifacts so that a user can seamlessly deploy an application from a single package. I’ve been excited to see the community work together to bring the specification to a 1.0 release that shows CNAB is ready for all of the world’s deployment needs. Congratulations to the team on the work and the fantastic partnership.

    If you want to learn more about the ways in which Azure and HashiCorp collaborate to make cloud-native development easier, please check out the links below:

    C++20 Concepts Are Here in Visual Studio 2019 version 16.3


C++20 Concepts are now supported for the first time in Visual Studio 2019 version 16.3 Preview 2. This includes both compiler and standard library support.

We’re debuting the feature via the /std:c++latest mode; once we have all C++20 features implemented across all Visual Studio products (compiler, library, IntelliSense, build system, debugger, etc.), we’ll provide them through a new /std:c++20 mode. IntelliSense support is not currently available, and our implementation doesn’t yet include the recent changes from the ISO C++ standards meeting in Cologne.

    What are C++ Concepts?

    Concepts are predicates that you use to express a generic algorithm’s expectations on its template arguments.

    Concepts allow you to formally document constraints on templates and have the compiler enforce them. As a bonus, you can also take advantage of that enforcement to improve the compile time of your program via concept-based overloading.

    There are many useful resources about Concepts on the Internet. For example, isocpp has many blog posts about Concepts which include one from Bjarne Stroustrup.

    What is supported?

    The compiler support includes: 

    The compiler support doesn’t include recent changes in the ISO C++ standards meeting in Cologne. 

    The library support includes: <concepts>

    Examples

Here are some examples of how Concepts can help you write more concise code, which also takes less time to compile.

    #include <concepts>
    
    // This concept tests whether 'T::type' is a valid type
    template<typename T>
    concept has_type_member = requires { typename T::type; };
    
    struct S1 {};
    struct S2 { using type = int; };
    
    static_assert(!has_type_member<S1>);
    static_assert(has_type_member<S2>);
    
    // Currently, MSVC doesn't support requires-expressions everywhere; they only work in concept definitions and in requires-clauses
    //template <class T> constexpr bool has_type_member_f(T) { return requires{ typename T::type; }; }
    template <class T> constexpr bool has_type_member_f(T) { return has_type_member<T>; }
    
    static_assert(!has_type_member_f(S1{}));
    static_assert(has_type_member_f(S2{}));
    
    // This concept tests whether 'T::value' is a valid expression which can be implicitly converted to bool
    // 'std::convertible_to' is a concept defined in <concepts>
    template<typename T>
    concept has_bool_value_member = requires { { T::value } -> std::convertible_to<bool>; };
    
    struct S3 {};
    struct S4 { static constexpr bool value = true; };
    struct S5 { static constexpr S3 value{}; };
    
    static_assert(!has_bool_value_member<S3>);
    static_assert(has_bool_value_member<S4>);
    static_assert(!has_bool_value_member<S5>);
    
    // The function is only a viable candidate if 'T::value' is a valid expression which can be implicitly converted to bool
    template<has_bool_value_member T>
    bool get_value()
    {
    	return T::value;
    }
    
    // This concept tests whether 't + u' is a valid expression
    template<typename T, typename U>
    concept can_add = requires(T t, U u) { t + u; };
    
    // The function is only a viable candidate if 't + u' is a valid expression
    template<typename T, typename U> requires can_add<T, U>
    auto add(T t, U u)
    {
    	return t + u;
    }

    What about ranges?

We are also working on ranges. The ranges library provides components for dealing with ranges of elements and has a tight relationship with Concepts.

In the meantime, we used the reference implementations range-v3 and cmcstl2 to test the Concepts support, and they helped discover many issues. Some are related to the Concepts implementation, and some are issues in other feature areas that are exposed by the new coding patterns enabled by Concepts. We fixed all issues in the first category and most of the issues in the second category (the remaining issues are worked around in the source). We now compile and run all the tests in these libraries during our CI (continuous integration).

The testing also helped expose some source issues in the reference implementations, and we reported them to the library owners.

    Looking for libraries using C++20 features

Like many other new features we implemented recently, Concepts uses the new parser and semantic analysis actions. While these have pretty good coverage and we have confidence in their quality, experience shows that we still sometimes see issues, especially when people start to adopt new coding patterns enabled by the new features.

We are always looking for libraries that make heavy use of new features. If you have a library that uses Concepts or other C++20 features, please let us know; we are willing to add it to our daily RWC (real world code) testing, which will help us improve our compiler.

    Talk to us!

    If you have feedback on the C++20 Concepts support in Visual Studio, we would love to hear from you. We can be reached via the comments below. You can also use the Report a Problem tool in Visual Studio or head over to Visual Studio Developer Community. You can also find us on Twitter @VisualC. 

    The post C++20 Concepts Are Here in Visual Studio 2019 version 16.3 appeared first on C++ Team Blog.

    September patches for Azure DevOps Server and Team Foundation Server


    This month, we are releasing fixes for security vulnerabilities that impact TFS 2015, TFS 2017, TFS 2018, and Azure DevOps Server 2019.

    CVE-2019-1305: cross site scripting (XSS) vulnerability in Repos

    CVE-2019-1306: remote code execution vulnerability in Wiki

    Here are the versions impacted:

    Azure DevOps Server 2019 Update 1 Patch 1

    If you have Azure DevOps Server 2019 Update 1, you should install Azure DevOps Server 2019 Update 1 Patch 1.

    Verifying Installation

To verify if you have this update installed, you can check the version of the following file: [INSTALL_DIR]\Application Tier\Web Services\bin\Microsoft.TeamFoundation.Framework.Server.dll. Azure DevOps Server 2019 is installed to c:\Program Files\Azure DevOps Server 2019 by default.

    After installing Azure DevOps Server 2019.1 Patch 1, the version will be 17.153.29226.8.

    Azure DevOps Server 2019.0.1 Patch 3

    If you have Azure DevOps Server 2019, you should first update to Azure DevOps Server 2019.0.1. Once on 2019.0.1, install Azure DevOps Server 2019.0.1 Patch 3.

    Verifying Installation

To verify if you have this update installed, you can check the version of the following file: [INSTALL_DIR]\Application Tier\Web Services\bin\Microsoft.TeamFoundation.Framework.Server.dll. Azure DevOps Server 2019 is installed to c:\Program Files\Azure DevOps Server 2019 by default.

    After installing Azure DevOps Server 2019.0.1 Patch 3, the version will be 17.143.29226.4.

    TFS 2018 Update 3.2 Patch 7

    If you have TFS 2018 Update 2 or Update 3, you should first update to TFS 2018 Update 3.2. Once on Update 3.2, install TFS 2018 Update 3.2 Patch 7.

    Verifying Installation

To verify if you have this update installed, you can check the version of the following file: [TFS_INSTALL_DIR]\Application Tier\Web Services\bin\Microsoft.TeamFoundation.WorkItemTracking.Web.dll. TFS 2018 is installed to c:\Program Files\Microsoft Team Foundation Server 2018 by default.

    After installing TFS 2018 Update 3.2 Patch 7, the version will be 16.131.29226.5.

    TFS 2018 Update 1.2 Patch 6

    If you have TFS 2018 RTW or Update 1, you should first update to TFS 2018 Update 1.2. Once on Update 1.2, install TFS 2018 Update 1.2 Patch 6.

    Verifying Installation

To verify if you have this update installed, you can check the version of the following file: [TFS_INSTALL_DIR]\Application Tier\Web Services\bin\Microsoft.TeamFoundation.Server.WebAccess.Admin.dll. TFS 2018 is installed to c:\Program Files\Microsoft Team Foundation Server 2018 by default.

    After installing TFS 2018 Update 1.2 Patch 6, the version will be 16.122.29226.6.

    TFS 2017 Update 3.1 Patch 8

    If you have TFS 2017, you should first update to TFS 2017 Update 3.1. Once on Update 3.1, install TFS 2017 Update 3.1 Patch 8.

    Verifying Installation

To verify if you have a patch installed, you can check the version of the following file: [TFS_INSTALL_DIR]\Application Tier\Web Services\bin\Microsoft.TeamFoundation.Server.WebAccess.Admin.dll. TFS 2017 is installed to c:\Program Files\Microsoft Team Foundation Server 15.0 by default.

    After installing TFS 2017 Update 3.1 Patch 8, the version will be 15.117.29226.0.

    TFS 2015 Update 4.2 Patch 3

    If you have TFS 2015, you should first update to TFS 2015 Update 4.2. Once on Update 4.2, install TFS 2015 Update 4.2 Patch 3.

    Verifying Installation

To verify if you have a patch installed, you can check the version of the following file: [TFS_INSTALL_DIR]\Application Tier\Web Services\bin\Microsoft.TeamFoundation.Framework.Server.dll. TFS 2015 is installed to c:\Program Files\Microsoft Team Foundation Server 14.0 by default.

    After installing TFS 2015 Update 4.2 Patch 3, the version will be 14.114.29226.0.

    The post September patches for Azure DevOps Server and Team Foundation Server appeared first on Azure DevOps Blog.


