
Advancing Microsoft Azure reliability


Reliance on cloud services continues to grow for industries, organizations, and people around the world. So now more than ever it is important that you can trust that the cloud solutions you rely on are secure, compliant with global standards and local regulations, keep data private and protected, and are fundamentally reliable. At Microsoft, we are committed to providing a trusted set of cloud services, giving you the confidence to unlock the potential of the cloud.

Over the past 12 months, Azure has operated core compute services at 99.995 percent average uptime across our global cloud infrastructure. However, at the scale Azure operates, we recognize that uptime alone does not tell the full story. We experienced three distinct, significant incidents that impacted customers during this period: a datacenter outage in the South Central US region in September 2018, Azure Active Directory (Azure AD) Multi-Factor Authentication (MFA) challenges in November 2018, and DNS maintenance issues in May 2019.

Building and operating a global cloud infrastructure of 54 regions made up of hundreds of evolving services is a large and complex task, so we treat each incident as an important learning moment. Outages and other service incidents are a challenge for all public cloud providers, and we continue to improve our understanding of the complex ways in which factors such as operational processes, architectural designs, hardware issues, software flaws, and human factors can align to cause service incidents. All three of the incidents mentioned resulted from multiple failures whose intricate interactions combined to produce a customer-impacting outage. In response, we are creating better ways to mitigate incidents through steps such as redundancies in our platform, quality assurance throughout our release pipeline, and automation in our processes. The capability of continuous, real-time improvement is one of the great advantages of cloud services, and while we will never eliminate all such risks, we are deeply focused on reducing both the frequency and the impact of service issues while being transparent with our customers, partners, and the broader industry.

Ensuring reliability is a fundamental responsibility for every Azure engineer. To augment these efforts, we have formed a new Quality Engineering team within my CTO office, working alongside our Site Reliability Engineering (SRE) team to pioneer new approaches to deliver an even more reliable platform. To keep improving our reliability, here are some of the initiatives that we already have underway:

  • Safe deployment practices – Azure approaches change automation through a safe deployment practice framework which aims to ensure that all code and configuration changes go through a cycle of specific stages. These stages include dev/test, staging, private previews, a hardware diversity pilot, and longer validation periods before a broader rollout to region pairs. This has dramatically reduced the risk that software changes will have negative impacts, and we are extending this mechanism to include software-defined infrastructure changes, such as networking and DNS.
  • Storage-account level failover – During the September 2018 datacenter outage, several storage stamps were physically damaged, requiring their immediate shutdown. Because it is our policy to prioritize data retention over time-to-restore, we chose to endure a longer outage to ensure that we could restore all customer data successfully. A number of you have told us that you want more flexibility to make this decision for your own organizations, so we are empowering customers by previewing the ability to initiate your own failover at the storage-account level (see the sketch after this list).
  • Expanding availability zones – Today, we have availability zones live in the 10 largest Azure regions, providing an additional reliability option for the majority of our customers. We are also working to bring availability zones to the next 10 largest Azure regions between now and 2021.
  • Project Tardigrade – At Build last month, I discussed Project Tardigrade, a new Azure service named after the nearly indestructible microscopic animals also known as water bears. This effort will detect hardware failures or memory leaks that can lead to operating system crashes just before they occur, so that Azure can then freeze virtual machines for a few seconds so the workloads can be moved to a healthy host.  
  • Low to zero impactful maintenance – We’re investing in improving zero-impact and low-impact update technologies including hot patching, live migration, and in-place migration. We’ve deployed dozens of security and reliability patches to host infrastructure in the past year, many of which were implemented with no customer impact or downtime. We continue to invest in these technologies to bring their benefits to even more Azure services.
  • Fault injection and stress testing – Validating that systems will perform as designed in the face of failures is possible only by subjecting them to those failures. We are increasingly fault-injecting our services before they go to production, both at a small scale with service-specific load stress and failures, and at regional and availability zone (AZ) scale with full region and AZ failure drills in our private canary regions. Our plan is to eventually make these fault injection services available to customers so that they can perform the same validation on their own applications and services.
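
To make the storage-account level failover concrete, here is a minimal sketch using the Azure CLI. This is an illustration only: it assumes the customer-initiated failover preview is enabled for your subscription, and the account and resource group names are hypothetical.

    # Fail a geo-redundant storage account over to its secondary region
    az storage account failover \
        --name mystorageaccount \
        --resource-group my-resource-group

Because failover promotes the secondary endpoint, it is worth rehearsing this flow in a test subscription before relying on it during an incident.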

Look for us to share more details of our internal architecture and operations in the future. While we are taking all of these steps to improve foundational reliability, Azure also provides you with high availability, disaster recovery, and backup solutions that can enable your applications to meet business availability requirements and recovery objectives. We maintain detailed guidance on designing reliable applications, including best practices for architectural design, monitoring application health, and responding to failures and disasters.

Reliability is and continues to be a core tenet of our trusted cloud commitments, alongside compliance, security, privacy, and transparency. Across all these areas, we know that customer trust is earned and must be maintained, not just by saying the right thing but by doing the right thing. Microsoft believes that a trusted, responsible, and inclusive cloud is grounded in how we engage as a business, how we develop our technology, how we conduct our advocacy and outreach, and how we serve the communities in which we operate. Microsoft is committed to providing a trusted set of cloud services, giving you the confidence to unlock the potential of the cloud.


Enhancing the customer experience with the Azure Networking MSP partner program


We are always looking for ways to improve the customer experience and allow our partners to complement our offerings. In support of these efforts, we are announcing the Azure Networking Managed Service Provider (MSP) program, along with partners that deliver value-added managed cloud network services to help enterprise customers connect, operationalize, and scale their mission-critical applications running in Azure.

The Azure Networking MSP partner program enables partners such as networking-focused MSPs, network carriers, and systems integrators (SIs) to use their rich networking experience to offer cloud and hybrid networking services around Azure’s growing portfolio of networking products and services.

Azure’s networking services are fundamental building blocks critical to cloud migration, optimal connectivity, and application security. Newer services such as Virtual WAN, ExpressRoute, Azure Firewall, and Azure Front Door further expand this portfolio, allowing customers to deploy richer applications in the cloud. Azure Networking MSP partners can help customers deploy and manage these services.

Azure Networking MSPs

Azure MSPs play a critical role in enterprise cloud transformation by bringing their deep knowledge and real-world experience to help enterprise customers migrate to Azure. Azure MSPs and the Azure Expert MSP program make it easy for customers to discover and engage specialized MSPs.

Azure Networking MSPs are a specialized set of MSPs for addressing enterprise cloud networking needs and challenges across all aspects of cloud and hybrid networking. Their managed network services and offerings include various aspects of the application lifecycle including network architecture, planning, deployment, operations, maintenance, and optimization.

Azure Lighthouse - unblocking Azure Networking MSPs

Many enterprise customers, such as banks and financial institutions, want partners who can help them manage their Azure networking subscriptions. However, managing each customer’s subscriptions individually introduces a lot of manual work for these service providers.

Last week, we announced Azure Lighthouse, which is a unique set of capabilities on Azure, empowering service provider partners with a single control plane to view and manage Azure at scale across all their customers with higher automation and efficiency. We also talked about how Azure Lighthouse enables management at scale for service providers.

With Azure Lighthouse, Azure Networking MSPs can seamlessly onboard customers via managed services offers on the Azure marketplace or natively via ARM templates – empowering them to deliver a rich set of managed network experiences for their end-customers.


Azure Networking MSP partners

Azure Networking partners play a big role in the Azure networking ecosystem, delivering Virtual WAN CPEs and hybrid networking services such as ExpressRoute to enterprises that are building cloud infrastructures. We welcome the following Azure Networking MSP launch partners into our Azure Networking MSP partner ecosystem.

An image showing how the Azure Networking MSP partner program will enable new business models and markets for Azure networking.

An image showing the logos for our partners including: Tata Communications, Aryaka, InterCloud, Megaport, British Telecommunications, Internet Initiative Japan, Nippon Telegraph and Telephone Corporation (NTT), Equinix

These partners have invested in people, best practices, operations and tools to build and harness deep Azure Networking knowledge and service capabilities. They’ve trained their staff on Azure and have partnered closely with us in Azure Networking through technical workshops and design reviews.

These partners are also early adopters of Azure Lighthouse, building and delivering a new generation of managed network experiences for their end customers. We encourage all worldwide networking MSPs, network carriers, and SIs that would like to join the Azure Networking MSP program to reach out via ManagedVirtualWAN@microsoft.com and bring your unique value and services to Azure customers.

In summary, we firmly believe that Azure customers will greatly benefit from the new cloud networking focused services our partners are bringing to market. Customers can use these services to augment their in-house skills, helping them move faster and more efficiently while making optimal use of the cloud to meet their enterprise business needs. For more information on how to engage with our Networking MSP partners, please see the partner information on our MSP partners site.

Introducing proximity placement groups


Co-locate your Azure resources for improved application performance

The performance of your applications is central to the success of your IT organization. Application performance can directly impact your ability to increase customer satisfaction and ultimately grow your business.

Many factors can affect the performance of your applications. One of them is network latency, which is affected, among other things, by the physical distance between the deployed virtual machines.

For example, when you place your Microsoft Azure Virtual Machines in a single Azure region, the physical distance between the virtual machines is reduced. Placing them within a single availability zone brings them closer still. However, as the Azure footprint grows, a single availability zone may span multiple physical data centers, resulting in network latency that can impact your overall application performance. If a region does not support availability zones, or if your application does not use them, the latency between the application tiers may increase as a result.

Today, we are announcing the preview of proximity placement groups, a new capability for achieving co-location of your Azure Infrastructure as a Service (IaaS) resources with low network latency among them.

Azure proximity placement groups represent a new logical grouping capability for your Azure Virtual Machines, which is used as a deployment constraint when selecting where to place them. When you assign your virtual machines to a proximity placement group, they are placed in the same data center, resulting in lower and more deterministic latency for your applications.

When to use proximity placement groups

Proximity placement groups improve the overall application performance by reducing the network latency among virtual machines. You should consider using proximity placement groups for multi-tiered, IaaS-based deployments where application tiers are deployed using multiple virtual machines, availability sets and/or virtual machine scale sets.

As an example, consider the case where each tier in your application is deployed in an availability set or virtual machine scale set for high availability. Using a single proximity placement group for all the tiers of your applications, even if they use different virtual machine SKUs and sizes, will force all the deployments to follow each other and land in the same data center for best latency.

In order to get the best results with proximity placement groups, make sure you’re using accelerated networking and optimize your virtual machines for low latency.

Getting started with proximity placement groups

The easiest way to start with proximity placement groups is to use them with your Azure Resource Manager (ARM) templates.

To create a proximity placement group resource, just add the following statement:

{
  "apiVersion": "2018-04-01",
  "type": "Microsoft.Compute/proximityPlacementGroups",
  "name": "[parameters('ppgName')]",
  "location": "[resourceGroup().location]"
}

To use this proximity placement group later in the template with a virtual machine (or availability set or virtual machine scale set), just add the following dependency and property:

{
  "name": "[parameters('virtualMachineName')]",
  "type": "Microsoft.Compute/virtualMachines",
  "apiVersion": "2018-06-01",
  "location": "[parameters('location')]",
  "dependsOn": [
    "[concat('Microsoft.Compute/proximityPlacementGroups/', parameters('ppgName'))]"
  ],
  "properties": {
    "proximityPlacementGroup": {
      "id": "[resourceId('Microsoft.Compute/proximityPlacementGroups', parameters('ppgName'))]"
    }
  }
}

To learn more about proximity placement groups, see the tutorials on using them with PowerShell and the CLI.
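
If you just want to experiment from the command line, here is a minimal sketch of the equivalent flow in the Azure CLI. This is a sketch under assumptions: it presumes a CLI version that includes the az ppg commands, and all resource names, the VM size, and the region are hypothetical.

    # Create the proximity placement group
    az ppg create \
        --name myPpg \
        --resource-group myResourceGroup \
        --location eastus

    # Create a VM inside the group; accelerated networking helps minimize latency
    az vm create \
        --name myVm \
        --resource-group myResourceGroup \
        --ppg myPpg \
        --accelerated-networking true \
        --size Standard_D8s_v3 \
        --image UbuntuLTS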

What to expect when using proximity placement groups

Proximity placement groups offer co-location in the same data center. However, because proximity placement groups represent an additional deployment constraint, allocation failures can occur (for example, you may not be able to place all of your Azure Virtual Machines in the same proximity placement group).

When you ask for the first virtual machine in the proximity placement group, the data center is automatically selected. In some cases, a second request for a different virtual machine SKU may fail if that SKU is not available in the data center already selected. In this case, an OverconstrainedAllocationRequest error will be returned. To troubleshoot, check which virtual machine SKUs are available in the chosen region or zone using the Azure portal or APIs. If all of the desired SKUs are available, try changing the order in which you deploy them.
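
The same availability check can be done from the command line; here is a quick sketch using the Azure CLI (the region name is only an example):

    # List the VM SKUs available in a region, including any zone restrictions
    az vm list-skus --location eastus --output table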

For elastic deployments that scale out, having a proximity placement group constraint on your deployment may result in a failure to satisfy the request. When using proximity placement groups, we recommend that you ask for all the virtual machines at the same time.

Proximity placement groups are in preview now and are offered free of charge in all public regions.

Please refer to our documentation for additional information about proximity placement groups.

Here’s what we’ve heard from SAP, who participated in the early preview program:

“It is really great to see this feature now publicly available. We are going to make use of it in our standard deployments. My team is automating large scale deployments of SAP landscapes. To ensure best performance of the systems it is essential to ensure low-latency between the different components of the system. Especially critical is the communication between Application server and the database, as well as the latency between HANA VMs when synchronous replication has to be enabled. In the late 2018 we did some measurements in various Azure regions and found out that sometimes the latency was not as expected and not in the optimal range. While discussing this with Microsoft, we were offered to join the early preview and evaluate the Proximity Placement Groups (PPG) feature. During our evaluation we were able to bring down the latency to less than 0.3 ms between all system components, which is more than sufficient to ensure great system performance. Best deterministic results we achieved when PPGs were combined with Network acceleration of VM NICs, which additionally improved the measured latencies.”

Ventsislav Ivanov, Development Architect, SAP

Digital distribution centers—The future is here


The pace of change has never been as fast as it is now. Globally, the population is becoming more urban and income levels are rising. By 2050, nearly 70 percent of the global population will live in cities or urban areas—that’s six billion people. Consumer behavior has also materially changed over the last decade, and omnichannel retail, personalization, and demand for same day deliveries are growing. To cater to the changing landscape, urban distribution centers that stage products closer to users within large cities are on the rise to enable faster delivery and greater customization.

Within the four walls of the distribution center, picking and packing tasks account for more than 50 percent of the total labor cost of warehousing operations. Access to labor has become increasingly challenging, particularly in urban centers, and staffing levels shoot up to five to ten times normal levels during the holiday season. Space constraints and difficulty in staffing are pushing companies to look at adopting distribution center technologies that cut labor costs, optimize the flow of products, and improve the productivity and utilization of these centers.

Since announcing Microsoft’s $5B commitment to developing an industry leading internet of things (IoT) platform last year, we’ve continued to work with our ecosystem partners to build solutions to address such problems. In “Our IoT Vision and Roadmap” session at Microsoft Build, we announced a partnership with Lenovo and NVIDIA, to bring advanced artificial intelligence (AI) to Azure IoT Edge. The demonstrated solution showed Lenovo hardware, a single SE350 Edge Server, running the Azure IoT Edge runtime with NVIDIA DeepStream to process multiple channels of 1080P/30FPS H265 video streams in real-time, transforming cameras into smart sensors that understand their physical environments and use vision algorithms to find missing products on a shelf or detect damaged goods. Such applications of Azure IoT Edge technology enable customers to quickly and cost effectively deploy retail solutions that optimize their logistics operations.

Today, we are excited to announce the next milestone on this journey, the preview of Lenovo’s Digital Distribution Center (DDC) solution. Lenovo’s DDC is an IoT solution developed in collaboration with NVIDIA and Microsoft. Through real-time, scalable package detection, tracking, and validation, DDC delivers better optimization and increased utilization of distribution centers for retail, manufacturing, and logistics operations. The solution uses multi-video stream analytics with artificial intelligence and machine learning inferencing to self-learn, optimize, and scale. Additional releases will include geofencing alerts, palletization, depalletization, and last-mile sorting.

Start your supply chain transformation with the Digital Distribution Center. Automate redundant, manual processes, increase employee productivity and safety, and maximize distribution center effectiveness.

DDC is built with Azure IoT Central, Microsoft’s fully managed IoT app platform that makes it easy to connect, monitor, and manage your IoT devices and products. Azure IoT Central simplifies the initial setup of your IoT solution and reduces the management burden, operational costs, and overhead of a typical IoT project. This allows solution builders to apply their energy and unique domain expertise to solving customer needs and creating business value, rather than needing to tackle the operating, managing, securing, and scaling of a global IoT solution. Partners like Lenovo and NVIDIA add unique value through schemas that are relevant to industry solutions like DDC, including common industry hierarchies that organize people, places, and environments.

Join us for a demo of our solution at the Microsoft partner booth during Microsoft Inspire—July 14-18, 2019, in Las Vegas, Nevada. If you are interested in joining the preview program for the solution, please contact IoTSolutions@lenovo.com.

Participate in the Developer Economics Survey



The Developer Economics Q2 2019 survey is here in its 17th edition to shed light on the future of the software industry. Every year more than 40,000 developers around the world participate in this survey, so this is a chance to be part of something big, voice your thoughts, and make your own contribution to the developer community.

Is this survey for me?

The survey is for all developers, whether you’re a professional, a hobbyist, or a student; building front-end, back-end, or full stack; working on desktop, web, gaming, cloud, mobile, IoT, AR/VR, machine learning, or data science.

What questions am I likely to be asked?

The survey asks questions about the status and the future of the software industry.

  • What’s going up and what’s going down in the software industry?
  • Which are your favorite tools and platforms?
  • Which programs are doing the best job supporting developers like you?

What’s in it for me?

There are some perks to go with your participation. Have a look at what you can get your hands on:

  • A chance to win awesome prizes like a Microsoft Surface Pro 6.
  • A free State of the Developer Nation report with the key findings (available September 2019).

What’s in it for Microsoft?

This is an independent survey from SlashData, an analyst firm in the developer economy that tracks global software developer trends. We’re interested in seeing the report that comes from this survey, and we want to ensure the broadest developer audience participates.

Of course, any data collected by this survey is between you and SlashData. You should review their Terms & Conditions page to learn more about the awarding of prizes, their data privacy policy, and how SlashData will handle your personal data.

Ready to go?

The survey is open until July 28, 2019.

Take the survey today

The survey is available in English, Chinese, Spanish, Portuguese, Vietnamese, Russian, Japanese, and Korean.


Resolve code issues in live apps running in Azure Kubernetes Services with the Snapshot Debugger


With ASP.NET Core, my life as a Windows-first developer has broadened dramatically. I can now develop apps without being tied to a single platform. Having spent most of my career as a “Windows only” developer, I am now taking on the task of redesigning a twenty-year-old, IIS-based service so that it can be built on a Mac and hosted on Linux in Azure.

Given these new hosting possibilities, one of my pressing concerns was how little I knew about the Linux world in general and, more specifically, what tools and techniques I could use when debugging complex issues in production. Thankfully for me, Visual Studio 2019 Enterprise has expanded Snapshot Debugger support to include Azure Kubernetes Services (AKS) running Linux containers.

Debug without interruption in AKS

Why Snapshot Debugger? Diagnosing production issues in the cloud can be difficult and time consuming because you may be dealing with issues that only occur at scale or in specific environments, and your favorite debugging tools are often unavailable. Remote debugging your production site is rarely an option because it is serving live traffic, and any action that stops the web process would immediately impact your customers.

Snapshot Debugger is built for production and works at cloud scale. If you are familiar with breakpoints and tracepoints in VS for debugging local code, then Snappoints and Logpoints are similar but for debugging against apps running in production (without stopping execution). You are able to attach to these snapshots within the familiar and intuitive environment of Visual Studio and analyze specific lines of code using your Locals, Watches and Call Stack windows.

Prepping your Dockerfiles for Snapshot Debugger

Setting up Snapshot Debugger to work with AKS Linux Docker containers requires the following Dockerfile prerequisites:

  • Install Snapshot Debugger prerequisites (libxml2, libuuid, libunwind, bash, unzip).
  • Install Visual Studio Snapshot Debugger components.
  • Set environment variables to load Visual Studio Snapshot Debugger into your .NET Core applications.
  • Install Visual Studio Debugger components.

We have provided a repo containing a set of Dockerfiles that demonstrate the setup on Docker images for several Linux variants. Each file includes the latest supported Snapshot Debugger backend package and sets the environment variables to load the debugger into your .NET Core application.

Linux variant        .NET Core   Dockerfile
Debian 9 (Stretch)   2.2         Dockerfile (amd64)
Alpine 3.8           2.2         Dockerfile (amd64)
Ubuntu (Bionic)      2.2         Dockerfile (amd64)
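
To give a sense of what those Dockerfiles contain, below is a minimal, hypothetical sketch for a Debian-based ASP.NET Core 2.2 image. Only the prerequisite installation is shown concretely; the debugger package download and the environment variables that load it should be copied from the Microsoft-provided Dockerfiles rather than guessed, and the app name is a placeholder.

    FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
    # Install the Snapshot Debugger prerequisites listed above
    RUN apt-get update \
        && apt-get install -y libxml2 libuuid1 libunwind8 bash unzip \
        && rm -rf /var/lib/apt/lists/*
    # Next: install the Visual Studio Snapshot Debugger components and set the
    # environment variables that load the debugger into .NET Core applications,
    # exactly as in the Dockerfiles from the repo above (omitted here).
    COPY ./publish /app
    WORKDIR /app
    ENTRYPOINT ["dotnet", "MyWebApp.dll"]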

 

Setup Snapshot Debugger for AKS in your ASP.NET Core project

You can use the following instructions to add Snapshot Debugger support to your ASP.NET Core web apps:

  • In Visual Studio 2019, configure your web application to use Docker:
    1. Right-click the web application and select Add > Docker Support.
    2. Set the target OS to Linux.

Add Docker Support to Visual Studio

  • Select a version of Linux to use from the Visual Studio Snapshot Debugger Docker image repository and merge it with the Dockerfile created for your web application. An example of a merged Dockerfile can be found here.
  • In Visual Studio, rebuild your web solution to create a new docker image.
  • Tag your local image. In the following example I tag “dasblog-core” as “poppastring/dasblog-core”:
    1. docker tag dasblog-core poppastring/dasblog-core
  • Push the newly tagged image to your image repository (Docker Hub in this example):
    1. docker push poppastring/dasblog-core
  • Deploy your application to AKS. More details on deploying to AKS with Cloud Shell and the Azure CLI can be found here. Additionally, this sample YAML file sets up a basic deployment and service scenario. (A condensed sketch of these build-and-deploy steps appears after this list.)
  • Choose Debug > Attach Snapshot Debugger and select the Azure Kubernetes Service your project is deployed to along with an Azure storage account and click Attach. You are now in snapshot debugger mode!
  • In the code editor, click the left gutter to create a Snappoint and click Start Collection to deploy your Snappoint to Azure.
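
For reference, here is the condensed build-and-deploy flow from the steps above as shell commands. It is a sketch under assumptions: the image and file names come from this post’s example, and the cluster and resource group names are hypothetical.

    # Build, tag, and push the image that contains the Snapshot Debugger bits
    docker build -t dasblog-core .
    docker tag dasblog-core poppastring/dasblog-core
    docker push poppastring/dasblog-core

    # Point kubectl at the AKS cluster and apply the deployment/service YAML
    az aks get-credentials --resource-group myResourceGroup --name myAksCluster
    kubectl apply -f deployment.yaml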

Set a Snappoint

  • Navigate to your service to execute the code represented by the Snappoint.
  • Once your Snapshot has been collected, in Visual Studio click on View Snapshot to view your Locals, Watches and Call Stack windows at that moment in time.

Check out this video where I use Snapshot Debugger to uncover a flaw in my open source code hosted in Azure. You can also find additional details on Debugging live Azure Kubernetes Services here.

Try it out!

We are excited to be able to offer first class production diagnostics support with Visual Studio 2019 Enterprise and Snapshot Debugger. Snapshot Debugging is now supported in Azure Kubernetes Services as well as Azure App Services, Azure Virtual Machines and Azure Virtual Machine scale sets. We encourage you to try out the new capabilities in your solution and we look forward to hearing your feedback and how we can make this feature better.

We’re also working towards adding support for ASP.NET Core 3 and additional Linux versions in the near future, so be sure to follow our Dockerfile repo for updates.



New capabilities in Stream Analytics reduce development time for big data apps


Azure Stream Analytics is a fully managed PaaS offering that enables real-time analytics and complex event processing on fast moving data streams. Thanks to zero-code integration with over 15 Azure services, developers and data engineers can easily build complex pipelines for hot-path analytics within a few minutes. Today, at Inspire, we are announcing various new innovations in Stream Analytics that help further reduce time to value for solutions that are powered by real-time insights. These are as follows:

Bringing the power of real-time insights to Azure Event Hubs customers

Today, we are announcing one-click integration with Event Hubs. Available as a public preview feature, this allows an Event Hubs customer to visualize incoming data and start writing a Stream Analytics query with one click from the Event Hubs portal. Once the query is ready, they can operationalize it in a few clicks and start deriving real-time insights. This will significantly reduce the time and cost of developing real-time analytics solutions.


One-click integration between Event Hubs and Azure Stream Analytics

Augmenting streaming data with SQL reference data support

Reference data is a static or slow-changing dataset used to augment real-time data streams to deliver more contextual insights. An example scenario is a table of currency exchange rates that is regularly updated to reflect market trends and used to convert a stream of billing events in different currencies to a common currency of choice.

Now generally available (GA), this feature provides out-of-the-box support for Azure SQL Database as reference data input. This includes the ability to automatically refresh your reference dataset periodically. Also, to preserve the performance of your Stream Analytics job, we provide the option to fetch incremental changes from your Azure SQL Database by writing a delta query. Finally, Stream Analytics leverages versioning of reference data to augment streaming data with the reference data that was valid at the time the event was generated. This ensures repeatability of results.

New analytics functions for stream processing

  • Pattern matching: With the new MATCH_RECOGNIZE function, you can easily define event patterns using regular expressions and aggregate methods to verify and extract values from the match. This enables you to easily express and run complex event processing (CEP) on your streams of data. For example, this function enables users to author a query that detects “head and shoulders” patterns on a stock market feed.
  • Analytics functions as aggregates: You can now use aggregates such as SUM, COUNT, AVG, MIN, and MAX directly with the OVER clause, without having to define a window. This enables users to easily express queries such as “Is the latest temperature greater than the maximum temperature reported in the last 24 hours?”
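
As a rough illustration of the aggregate-with-OVER syntax, the “last 24 hours” comparison above might be expressed as follows. This is a sketch only; the input name and fields are hypothetical:

    SELECT
        deviceId,
        temperature,
        MAX(temperature) OVER (PARTITION BY deviceId LIMIT DURATION(hour, 24)) AS maxTemp24h
    FROM input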

Egress to Azure Data Lake Storage Gen2

Azure Stream Analytics is a central component within the big data analytics pipelines of Azure customers. While Stream Analytics focuses on the real-time or hot-path analytics, services like Azure Data Lake help enable batch processing and advanced machine learning. Azure Data Lake Storage Gen2 takes core capabilities from Azure Data Lake Storage Gen1, such as a Hadoop compatible file system, Azure Active Directory, and POSIX-based ACLs, and integrates them into Azure Blob Storage. This combination enables best-in-class analytics performance along with storage tiering and data lifecycle management capabilities and the fundamental availability, security, and durability capabilities of Azure Storage.

Azure Stream Analytics now offers native zero-code integration with Azure Data Lake Storage Gen2 output (preview).

Enhancements to blob output

  • Native support for Apache Parquet format: Native support for egress in Apache Parquet format into Azure Blob Storage is now generally available. Parquet is a columnar format enabling efficient big data processing. By outputting data in Parquet format into a blob store or a data lake, you can take advantage of Azure Stream Analytics to power large-scale streaming extract, transform, and load (ETL), to run batch processing, to train machine learning algorithms, or to run interactive queries on your historical data.
  • Managed identities (formerly MSI) authentication: Azure Stream Analytics now offers full support for managed identity based authentication with Azure Blob Storage on the output side. Customers can continue to use the connection string based authentication model. This feature is available as a public preview.

Many of these features just started rolling out worldwide and will be available in all regions within several weeks.

Feedback

The Azure Stream Analytics team is highly committed to listening to your feedback and letting the user voice influence our future investments. We welcome you to join the conversation and make your voice heard via our UserVoice page.

The next version of Microsoft Edge: Enterprise evaluation and roadmap


This week at the Microsoft Inspire 2019 conference, we are sharing an update on capabilities that we are investing in to make the next version of Microsoft Edge the best browser for enterprises and business customers of all sizes.

The Dev Channel now has enterprise features enabled by default and is ready for evaluation, supported by detailed deployment and configuration documentation. We are also offering full support for deployment in pilot and production environments through our commercial support channels.

Dev Channel builds, including offline installers and ADMX files, are available at https://www.microsoftedgeinsider.com/enterprise. We’re excited to hear from you about how these enterprise-focused features work in your environment and improve end user productivity.

Looking forward

In the rest of this post, we’ll share the updates we are covering at Inspire, and outline our goals and roadmap for Microsoft Edge for enterprises and business customers.

Fundamentals first

To make the next version of Microsoft Edge a great browsing experience for enterprise and business customers, we begin with the fundamentals: compatibility with the modern web, full support across platforms, and keeping the browser secure, up to date, and consistent across devices. Our adoption of open source software, announced last December, coupled with a complete re-building of our engineering, deployment, and update systems, is enabling us to deliver on these commitments.

You may have already seen our new updating system in action with our Canary and Dev Channels, which deliver daily and weekly builds to all devices automatically, across all supported Windows platforms and Mac OS. These desktop platforms join Microsoft Edge for iOS and Android, which have rich support for enterprise management with Microsoft Intune.

Internet Explorer mode

One of the features available for evaluation is Internet Explorer mode, a feature that integrates IE11 natively into Microsoft Edge. Internet Explorer mode allows users to navigate seamlessly from a modern web application to one that requires legacy HTML or plugins. You’ll no longer need a “two-browser” solution.

We know that most of our customers are using IE11 in their environments. One thing that our customers made clear to us is that their web apps that rely on IE11 tend to be critical to many of their business processes. The apps work well and don’t change, which allows customers to focus their IT resources on other problem areas. Any solution we provide would need to just work with their sites.

The team designed Internet Explorer mode to meet that need, with a goal of 100% compatibility with sites that work today in IE11, including full support for IE’s doc modes, as well as ActiveX controls (like Silverlight) and Browser Helper Objects (BHOs). In addition, Internet Explorer mode appears visually like it’s just a part of the next Microsoft Edge, providing users with the latest UI features, like a smarter address bar and new tab page, and greater privacy controls for the modern web.

By leveraging the Enterprise Mode Site List that many customers have already built and deployed to support the two-browser solution, IT professionals can enable users of the next Microsoft Edge to simply navigate to IE11-dependent sites and have them just work. Navigating back to a modern site will be seamless, with no need for a separate window or tab.

For more background on Internet Explorer mode, please check out this video discussing Microsoft Edge enterprise compatibility.

Simple to deploy and manage

Another goal of ours is to make Microsoft Edge the easiest browser deployment decision customers have ever made. This is true particularly if you have existing investments in Microsoft 365 and Microsoft technologies, but we are also deeply committed to making sure that Microsoft Edge works well with first- and third-party management tools.

The next version of Microsoft Edge supports a range of Group Policies, allowing customers to configure every aspect of the deployment and product experience. We will also support Mobile Device Management (MDM) deployments on Windows 10 (via Microsoft Intune or third-party products), as well as popular deployment and management tools on Mac OS.

Customers will be able to control the flow of updates, either by leveraging our general updating mechanisms and using policies to pause updates at a particular version while testing compatibility with a small set of pilot users, or by using the provided offline installers (MSIs and PKGs) to push updates directly to their managed devices on their own schedule.

For those customers using System Center Configuration Manager (SCCM) or Microsoft Intune, we’re working to make the deployment and configuration experience as easy as possible. We will also work with third parties, ensuring that deploying and configuring Microsoft Edge is a great experience with those tools as well.

Keeping customers and data protected

Customers tell us that their users spend 60% or more of their time on a desktop or laptop PC in a browser, making the security of the browser critical to the integrity of the organizational environment and data.

In addition to fundamental security features that are derived from Chromium (e.g. sandboxing and site isolation), our teams are working with the Chromium security teams to help improve the core security of all Chromium-based browsers on Windows.

We’re also engineering our update systems to ensure that we can respond to vulnerabilities and get fixes out to customers as quickly as possible.

The current version of Microsoft Edge has a number of security innovations that we intend to bring forward to the next version of Microsoft Edge. This includes integrating our industry-leading Microsoft Defender SmartScreen technology into the next browser on all our supported platforms, in order to help protect users from phishing, malware, and scams.

We’re also bringing forward some of the enterprise-class security innovations that we pioneered in our existing version of Microsoft Edge, including:

  • Application Guard on Windows 10, a Hyper-V based technology that isolates general internet browsing into a container to protect the corporate network from exploits
  • Azure AD Conditional Access to help organizations keep their users productive while controlling access to corporate sites
  • Microsoft Information Protection to help organizations manage what users can do with the data they access through the browser

More productive at work

We’ve heard from administrators and individuals within organizations that we have an opportunity with Microsoft Edge to make daily activities easier and empower people to get more done.

Balancing compliance and access to information shouldn’t be a tradeoff. Microsoft Edge natively supports signing into the browser with Azure Active Directory (AAD) work or school accounts. This means users’ favorites and other browser data can be synced securely between devices, including Windows, macOS, iOS and Android devices, while respecting your organization’s compliance requirements. Also, once signed in to the browser, single sign-on ensures that access to corporate sites will just work.

Searching for information is, of course, one of the top activities that people do in a browser, and perhaps the need is even higher for finding information within the corporate network. By combining the next Microsoft Edge with the power of the Microsoft Graph, we’re investing to bring your organization’s information to your users’ fingertips.

Every time a user opens a new tab or starts a new task, they see the new tab page (NTP). In the next version of Microsoft Edge, an enterprise-focused NTP will be available to empower people with fast access to what they need. Users will see the corporate web apps, documents, and sites they use most, as well as recommended content from Office 365. Whether it’s highlighting the document they were collaborating on with a colleague or making them aware of important company-wide communications, the NTP dynamically brings information that’s relevant to each person.

To find internal information, what could be simpler than using the search box to find what you’re looking for? We’re infusing Microsoft Edge with native support for Microsoft Search in Bing for Microsoft 365 customers. Microsoft Search in Bing integration makes the Edge search box a one-stop shop for results from the web and from the corporate network, using Microsoft AI to extract the most relevant and useful information from the network. Administrators can even customize the suggestions and results for their specific environment.

Our commitment to online privacy protections

At Build 2019, we shared our commitment to offering greater transparency and control over your online data and highlighted one specific feature we’re working on: tracking prevention. Tracking prevention is designed to protect you from being tracked by websites that you aren’t accessing directly. Whenever a website is visited, trackers from other sites may save information in the browser using cookies and other storage mechanisms. This information may include the sites you’ve visited and the content you’re interested in, building a digital profile which can be accessed by organizations to offer personalized content when visiting other sites.

Tracking prevention is still being tested, but this feature, as well as all other privacy tools we introduce for consumers, will be available to our enterprise customers – both IT administrators and end users.

Open to your feedback

We’re excited to get these features into your hands and start hearing your feedback.

Many of the features described in the roadmap are available today in our Insider channels. Some start rolling out on our servers today and will slowly roll out over the next couple of weeks. Others are still in development and will come in later updates. We believe that with today’s announcement, the enterprise feature set is complete enough for most companies to start evaluations and pilots.

Here is a breakdown of which features are available today, rolling out soon, or coming in the future:

Roadmap chart showing Enterprise features available and coming soon (see www.microsoftedgeinsider.com/enterprise for details)

These features represent only the beginning of our commitment to making Microsoft Edge the best browser for your business across platforms, especially if you have invested in Microsoft 365.

We eagerly await your feedback: use the feedback tool in the new Microsoft Edge preview builds, post in the Microsoft Edge Insider forums, or, if you are a Microsoft customer, start a conversation with your account team today.

Sean Lyndersay, Group Program Manager
Colleen Williams, Senior Program Manager



Assess the readiness of SQL Server data estates migrating to Azure SQL Database


Migrating hundreds of SQL Server instances and thousands of databases to Azure SQL Database, our Platform as a Service (PaaS) offering, is a considerable task, and to streamline the process as much as possible, you need to feel confident about your relative readiness for migration. Being able to identify the low-hanging fruit, including the servers and databases that are fully ready or that require minimal effort to prepare for migration, eases and accelerates your efforts. We are pleased to share that Azure database target readiness recommendations have been enabled.

Azure Migrate - Databases

The Azure Migrate hub provides a unified view of all your migrations across servers, applications, and databases. This integration provides customers with a seamless migration experience beginning in the discovery phase. The functionality allows customers to use assessment tools for visibility into the applications currently running on-premises so that they can determine cloud suitability and project the cost of running their applications in the cloud. It also allows customers to compare options between competing public and hybrid cloud options.

Assessing and viewing results

Assessing the overall readiness of your data estate for a migration to Azure SQL Database requires only a few steps:

1. Provision an instance of Azure Migrate, create a migration project, and then add Data Migration Assistant to the migration solution to perform the assessment.
2. After you create the migration project, download Data Migration Assistant and run an assessment against one or more SQL Server instances.
3. Upload the Data Migration Assistant assessment results to the Azure Migrate hub.

In a few minutes, the Azure SQL Database target readiness results will be available in your Azure Migrate project.

You can use a single assessment for as many SQL Servers as you want, or you can run multiple parallel assessments and upload them to the Azure Migrate hub. The Azure Migrate hub consolidates all the assessments and provides a summarized view of SQL Server and database readiness.

The Azure Migrate dashboard provides a view of your data estate and its overall readiness for migration. This includes the number of databases that are ready to migrate to Azure SQL Database and to SQL Server hosted on an Azure virtual machine. Readiness is computed based on feature parity and schema compatibility with various Azure SQL Database offerings. The dashboard also provides insight into overall migration blockers and the all-up effort involved with migrating to Azure.

IT pros and database administrators can drill down further to view a specific set of SQL Server instances and databases for a better understanding of their readiness for migration.

Assessed instances

The “Assessed databases” view provides an overview of individual databases, showing information like migration blockers and readiness for Azure SQL Database and for SQL Server hosted on an Azure virtual machine.

Assessed databases

Get started

Migrations can be overwhelming and a bit daunting, but we’re here with the expertise and tools, like Data Migration Assistant, to support you along the way. Discover your readiness results and accelerate your migration.

Get started:

Microsoft makes it easier to build popular language representation model BERT at large scale


This post is co-authored by Rangan Majumder, Group Program Manager, Bing and Maxim Lukiyanov, Principal Program Manager, Azure Machine Learning.

Today we are announcing the open sourcing of our recipe to pre-train BERT (Bidirectional Encoder Representations from Transformers) built by the Bing team, including code that works on Azure Machine Learning, so that customers can unlock the power of training custom versions of BERT-large models using their own data. This will enable developers and data scientists to build their own general-purpose language representation beyond BERT.

The area of natural language processing has seen an incredible amount of innovation over the past few years, with one of the most recent advances being BERT. BERT, a language representation created by Google AI Language research, made significant advancements in the ability to capture the intricacies of language and improved the state of the art for many natural language applications, such as text classification, extraction, and question answering. This new language representation enables developers and data scientists to use BERT as a stepping-stone to solve specialized language tasks and get much better results than when building natural language processing systems from scratch.

The broad applicability of BERT means that most developers and data scientists are able to use a pre-trained variant of BERT rather than building a new version from the ground up with new data. While this is a reasonable solution if the domain’s data is similar to the original model’s data, it will not deliver best-in-class accuracy when crossing over to a new problem space. For example, training a model for the analysis of medical notes requires a deep understanding of the medical domain, providing career recommendations depends on insights from a large corpus of text about jobs and candidates, and legal document processing requires training on legal domain data. In these cases, to maximize the accuracy of the Natural Language Processing (NLP) algorithms one needs to go beyond fine-tuning to pre-training the BERT model.

Additionally, to advance language representation beyond BERT’s accuracy, users will need to change the model architecture, training data, cost function, tasks, and optimization routines. All these changes need to be explored at large parameter and training data sizes. In the case of BERT-large, this can be quite substantial, as it has 340 million parameters and was trained over 2.5 billion words of Wikipedia and 800 million words of BookCorpus. To support this with Graphical Processing Units (GPUs), the most common hardware used to train deep learning-based NLP models, machine learning engineers need distributed training support to train these large models. However, due to the complexity and fragility of configuring these distributed environments, even expert tweaking can end up with inferior results from the trained models.

To address these issues, Microsoft is open sourcing a first-of-its-kind, end-to-end recipe for training custom versions of BERT-large models on Azure. Overall this is a stable, predictable recipe that converges to a good optimum for developers and data scientists to try explorations on their own.

“Fine-tuning BERT was really helpful to improve the quality of various tasks important for Bing search relevance,” says Rangan Majumder, Group Program Manager at Bing, who led the open sourcing of this work. “But there were some tasks where the underlying data was different from the original corpus BERT was pre-trained on, and we wanted to experiment with modifying the tasks and model architecture. In order to enable these explorations, our team of scientists and researchers worked hard to solve how to pre-train BERT on GPUs. We could then build improved representations leading to significantly better accuracy on our internal tasks over BERT. We are excited to open source the work we did at Bing to empower the community to replicate our experiences and extend it in new directions that meet their needs.”

“To get the training to converge to the same quality as the original BERT release on GPUs was non-trivial,” says Saurabh Tiwary, Applied Science Manager at Bing. “To pre-train BERT we need massive computation and memory, which means we had to distribute the computation across multiple GPUs. However, doing that in a cost effective and efficient way with predictable behaviors in terms of convergence and quality of the final resulting model was quite challenging. We’re releasing the work that we did to simplify the distributed training process so others can benefit from our efforts.”

Results

To test the code, we trained a BERT-large model on a standard dataset and reproduced the results of the original paper on a set of GLUE tasks, as shown in Table 1. To give you an estimate of the compute required, in our case we ran training on an Azure ML cluster of 8 ND40_v2 nodes (64 NVidia V100 GPUs total) for 6 days to reach the accuracy listed in the table. The actual numbers you see will vary based on your dataset and your choice of BERT model checkpoint to use for the downstream tasks.

Table 1. GLUE test results, evaluated by the provided test script on the GLUE development set. The “Average” column is a simple average over the table results. F1 scores are reported for QQP and MRPC, Spearman correlations are reported for STS-B, and accuracy scores are reported for the other tasks. The results for tasks with smaller dataset sizes have significant variation and may require multiple fine-tuning runs to reproduce the results.

The code is available in open source on the Azure Machine Learning BERT GitHub repo. Included in the repo are:

• A PyTorch implementation of the BERT model from the Hugging Face repo.
                  • Raw and pre-processed English Wikipedia dataset.
                  • Data preparation scripts.
                  • Implementation of optimization techniques such as gradient accumulation and mixed precision.
                  • An Azure Machine Learning service Jupyter notebook to launch pre-training of the model.
                  • A set of pre-trained models that can be used in fine-tuning experiments.
                  • Example code with a notebook to perform fine-tuning experiments.

                  With a simple “Run All” command, developers and data scientists can train their own BERT model using the provided Jupyter notebook in Azure Machine Learning service. The code, data, scripts, and tooling can also run in any other training environment.

                  Summary

                  We could not have achieved these results without leveraging the amazing work of the researchers before us, and we hope that the community can take our work and go even further. If you have any questions or feedback, please head over to our GitHub repo and let us know how we can make it better.

                  Learn how Azure Machine Learning can help you streamline the building, training, and deployment of machine learning models. Start free today.

                  New transit options to help get you there – Real Time Updates, Trip Frequency, and Alternate Routes


                  Over the last six months, the Bing Maps team has been hard at work to improve the quality of mass transit routing. Here are our three biggest improvements, which you can try out today on Bing.com and Bing.com/maps.

                  Real Time Updates

Real-time transit trip updates tell you whether your bus or train is delayed, running on time, or even early. Our Bing Maps transit coverage includes over 250 agencies in 9 countries, with well over 100 agencies in the United States alone.

                  Trip Frequency

Trip frequency tells you how often a bus or train runs – super useful when you need to time your departure to a transit stop. For example, Bus 560 is scheduled to depart every 30 minutes.

                  Alternate Routes

                  Finally, we have improved the logic behind alternate transit route selection and now provide alternate transit options on Bing.com as part of the Directions Answer experience. For example, if there is only one bus route that will get you from A to B, the alternate transit route options will simply be the next departure times for that bus.

                  Below is a screenshot that highlights each of the new transit options now available:

                  Bing Maps Transit Improvements

                  Let us know what you think about these latest improvements. You can connect with the team at the Bing Maps Forums to share feedback and let us know what you would like to see next.

Simon Shapiro
                  Bing Maps Program Manager
                  https://www.linkedin.com/in/simonshapiro/

                  Announcing ML.NET 1.2 and Model Builder updates (Machine Learning for .NET)


                  We are excited to announce ML.NET 1.2 and updates to Model Builder and the CLI. ML.NET is an open-source and cross-platform machine learning framework for .NET developers. ML.NET also includes Model Builder (a simple UI tool for Visual Studio) and the ML.NET CLI (Command-line interface) to make it super easy to build custom Machine Learning (ML) models using Automated Machine Learning (AutoML).

                  Using ML.NET, developers can leverage their existing tools and skill-sets to develop and infuse custom ML into their applications by creating custom machine learning models for common scenarios like Sentiment Analysis, Price Prediction, Image Classification and more!

                  The following are some of the key highlights in this update:

                  ML.NET Updates

                  ML.NET 1.2 is a backwards compatible release with no breaking changes so please update to get the latest changes.

                  General availability of TimeSeries support for forecasting and anomaly detection

Developers can use the Microsoft.ML.TimeSeries package for many scenarios, such as detecting spikes and changes in product sales with an anomaly detection model, or creating sales forecasts that could be affected by seasonality and other time-related context.

                  Learn more through these samples.
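As a concrete illustration, here is a minimal sketch of spike detection with the Microsoft.ML.TimeSeries package. The data class, the sample values, and the detector parameters below are placeholder assumptions for illustration, not a prescribed setup:

using System;
using System.Collections.Generic;
using Microsoft.ML;
using Microsoft.ML.Data;

public class SalesData
{
    public float Sales { get; set; }
}

public class SpikePrediction
{
    // Three values per row: alert flag, raw score, and p-value.
    [VectorType(3)]
    public double[] Prediction { get; set; }
}

class Program
{
    static void Main()
    {
        var mlContext = new MLContext();

        // Placeholder daily sales figures; in practice, load these from a file or database.
        var data = new List<SalesData>
        {
            new SalesData { Sales = 100 }, new SalesData { Sales = 98 },
            new SalesData { Sales = 102 }, new SalesData { Sales = 340 }, // a spike
            new SalesData { Sales = 101 }, new SalesData { Sales = 99 },
        };
        var dataView = mlContext.Data.LoadFromEnumerable(data);

        // Flag spikes in an i.i.d. series at 95% confidence, using a
        // sliding history of 4 points for the p-value computation.
        var pipeline = mlContext.Transforms.DetectIidSpike(
            outputColumnName: nameof(SpikePrediction.Prediction),
            inputColumnName: nameof(SalesData.Sales),
            confidence: 95,
            pvalueHistoryLength: 4);

        var transformed = pipeline.Fit(dataView).Transform(dataView);

        foreach (var p in mlContext.Data.CreateEnumerable<SpikePrediction>(transformed, reuseRowObject: false))
        {
            Console.WriteLine($"Alert: {p.Prediction[0]}, Score: {p.Prediction[1]:F2}, P-Value: {p.Prediction[2]:F4}");
        }
    }
}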

                  General availability of ML.NET packages to use TensorFlow and ONNX models

                  ML.NET has been designed as an extensible platform so that you can consume other popular ML models such as TensorFlow and ONNX models and have access to even more machine learning and deep learning scenarios, like image classification, object detection, and more.

Learn more through these code samples for Microsoft.ML.OnnxTransformer and Microsoft.ML.TensorFlow and the end-to-end ML.NET computer vision sample apps.
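To give a flavor of what consuming an ONNX model looks like, here is a hedged sketch. The model file name, the “data”/“score” node names, and their dimensions are assumptions of ours; you would replace them with your model’s actual input and output nodes (which you can inspect with a tool such as Netron):

using System;
using Microsoft.ML;
using Microsoft.ML.Data;

// Column names must match the ONNX model's input and output node names.
// "data", "score", and the dimensions below are placeholders.
public class OnnxInput
{
    [ColumnName("data"), VectorType(3 * 224 * 224)]
    public float[] Data { get; set; }
}

public class OnnxOutput
{
    [ColumnName("score"), VectorType(1000)]
    public float[] Score { get; set; }
}

class OnnxScoring
{
    static void Main()
    {
        var mlContext = new MLContext();

        // An empty data view is enough for the pipeline to derive its schema.
        var emptyData = mlContext.Data.LoadFromEnumerable(new OnnxInput[0]);

        // Wrap the pre-trained ONNX model as an ML.NET transformer.
        var pipeline = mlContext.Transforms.ApplyOnnxModel("model.onnx");
        var model = pipeline.Fit(emptyData);

        var engine = mlContext.Model.CreatePredictionEngine<OnnxInput, OnnxOutput>(model);
        var output = engine.Predict(new OnnxInput { Data = new float[3 * 224 * 224] });

        Console.WriteLine($"Scores returned: {output.Score.Length}");
    }
}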

Easily integrate ML.NET models in web or serverless apps with the Microsoft.Extensions.ML integration package (preview)

This package makes it easier to load ML.NET models for scoring in ASP.NET apps, Azure Functions, and web services. Specifically, the package allows a developer to use Microsoft.Extensions.ML to load an ML.NET model using dependency injection and to optimize the model’s execution and performance in multi-threaded environments such as ASP.NET Core apps. Learn more here.
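As a sketch of what this looks like in an ASP.NET Core app (the input/output classes, model name, and file path below are placeholders of our own, not part of the package):

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.ML;

public class ModelInput
{
    public string SentimentText { get; set; }
}

public class ModelOutput
{
    public bool Prediction { get; set; }
}

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Registers a pooled, thread-safe set of prediction engines.
        // "SentimentModel" and the .zip path are placeholders; with
        // watchForChanges enabled, the pool reloads the model when the
        // file is updated on disk.
        services.AddPredictionEnginePool<ModelInput, ModelOutput>()
            .FromFile(modelName: "SentimentModel",
                      filePath: "MLModel.zip",
                      watchForChanges: true);
    }
}

A controller or service can then take a PredictionEnginePool<ModelInput, ModelOutput> as a constructor dependency and call Predict("SentimentModel", input) without worrying about thread safety.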

ML.NET CLI updated to 0.14 (preview)

You can use the ML.NET CLI to automatically generate an ML.NET model and the underlying C# code. You can run the ML.NET CLI at any command prompt (Windows, Mac, or Linux).

                  You simply need to provide your own dataset and select the machine learning task you want to implement (such as classification or regression), and the CLI uses the AutoML engine to create model generation and deployment source code, as well as the binary model.
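For example, a sentiment classification model could be generated from a tab-separated file with a command along these lines; the dataset and column names are placeholders, and flags may differ across preview releases, so check mlnet --help for your version:

> mlnet auto-train --task binary-classification --dataset "customer-feedback.tsv" --label-column-name Sentiment --max-exploration-time 60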

The CLI has been updated to 0.14, addressing customer feedback.

Learn more about the CLI here.

                  Model Builder updates

ML.NET Model Builder provides an easy-to-understand visual interface to build, train, and deploy custom machine learning models. (The updated Model Builder version will be available later today, July 17, 2019.)

                  Expanding support to .txt files and more delimiters for values

Users can now use .txt files for training the model. In the initial previews, Model Builder supported only .csv and .tsv files. Values can be separated by the following delimiters: space, comma, tab, and semicolon.

                  No limits on training data size!

By popular request, we have removed the 1 GB limit on the training data size. Developers can now upload files of any size.

                  Smart defaults for training time for large datasets

The default training time is now set based on the size of your data; previously, it was always 10 seconds. This allows Model Builder to find at least one model within the allotted time.

Learn more about how long you should train for.

Improved model consumption experience

                  In the code generation step at the end of the model building process, Model Builder now also adds the ML.NET 1.2 NuGet package as well as a reference to the class library project to your project. This makes it much easier to consume the model in your application.

                  Update to ML.NET 1.2

Model Builder now uses the latest version of ML.NET, and the generated code references ML.NET 1.2. In the earlier previews, it used ML.NET 1.0.

                  Customer feedback addressed

Many issues were fixed in this release. Learn more in the release notes.

Need help going to production? Fill out this form!

                  If you are using ML.NET in your app and looking to go into production, you can talk to an engineer on the ML.NET team.

                  Try ML.NET and Model Builder today!

                  Summary

                  We are excited to release these updates for you and we look forward to seeing what you will build with ML.NET. If you have any questions or feedback, you can ask them here for ML.NET and Model Builder.

                  Your friends @ML.NET

                  The post Announcing ML.NET 1.2 and Model Builder updates (Machine Learning for .NET) appeared first on .NET Blog.

                  DragonFruit and System.CommandLine is a new way to think about .NET Console apps


                  There's some interesting stuff quietly happening in the "Console App" world within open source .NET Core right now. Within the https://github.com/dotnet/command-line-api repository are three packages:

                  • System.CommandLine.Experimental
                  • System.CommandLine.DragonFruit
                  • System.CommandLine.Rendering

                  These are interesting experiments and directions that are exploring how to make Console apps easier to write, more compelling, and more useful.

                  The one I am the most infatuated with is DragonFruit.

                  Historically Console apps in classic C look like this:

#include <stdio.h>

int main(int argc, char *argv[])
{
    printf("Hello, World!\n");
    return 0;
}

                  That first argument argc is the count of the number of arguments you've passed in, and argv is an array of pointers to 'strings,' essentially. The actual parsing of the command line arguments and the semantic meaning of the args you've decided on are totally on you.

                  C# has done it this way, since always.

static void Main(string[] args)
{
    Console.WriteLine("Hello World!");
}

                  It's a pretty straight conceptual port from C to C#, right? It's an array of strings. Argc is gone because you can just args.Length.

If you want to make an app that does a bunch of different stuff, you've got a lot of string parsing before you get to DO the actual stuff your app is supposed to do. In my experience, a simple console app with real proper command line arg validation can end up with half the code parsing crap and half doing stuff.

                  myapp.com someCommand --param:value --verbose

                  The larger question - one that DragonFruit tries to answer - is why doesn't .NET do the boring stuff for you in an easy and idiomatic way?

                  From their docs, what if you could declare a strongly-typed Main method? This was the question that led to the creation of the experimental app model called "DragonFruit", which allows you to create an entry point with multiple parameters of various types and using default values, like this:

                  static void Main(int intOption = 42, bool boolOption = false, FileInfo fileOption = null)
                  {
                      Console.WriteLine($"The value of intOption is: {intOption}");
                      Console.WriteLine($"The value of boolOption is: {boolOption}");
                      Console.WriteLine($"The value of fileOption is: {fileOption?.FullName ?? "null"}");
                  }

                  In this concept, the Main method - the entry point - is an interface that can be used to infer options and apply defaults.

using System;

namespace DragonFruit
{
    class Program
    {
        /// <summary>
        /// DragonFruit simple example program
        /// </summary>
        /// <param name="verbose">Show verbose output</param>
        /// <param name="flavor">Which flavor to use</param>
        /// <param name="count">How many smoothies?</param>
        static int Main(
            bool verbose,
            string flavor = "chocolate",
            int count = 1)
        {
            if (verbose)
            {
                Console.WriteLine("Running in verbose mode");
            }
            Console.WriteLine($"Creating {count} banana {(count == 1 ? "smoothie" : "smoothies")} with {flavor}");
            return 0;
        }
    }
}

                  I can run it like this:

                  > dotnet run --flavor Vanilla --count 3   
                  
                  Creating 3 banana smoothies with Vanilla

The way DragonFruit does this is super clever. During the build process, DragonFruit changes this public strongly typed Main to a private one (so it's not seen from the outside and .NET won't consider it an entry point). It's then replaced with a Main like the one below, but you'll never see it, as it lives in the compiled/generated artifact.

public static async Task<int> Main(string[] args)
{
    return await CommandLine.ExecuteAssemblyAsync(typeof(AutoGeneratedProgram).Assembly, args, "");
}

                  So DragonFruit has swapped your Main for its smarter Main and the magic happens! You'll even get free auto-generated help!

DragonFruit:
  DragonFruit simple example program

Usage:
  DragonFruit [options]

Options:
  --verbose            Show verbose output
  --flavor <flavor>    Which flavor to use
  --count <count>      How many smoothies?
  --version            Display version information

                  If you want less magic and more power, you can use the same APIs DragonFruit uses to make very sophisticated behaviors. Check out the Wiki and Repository for more and perhaps get involved in this open source project!

                  I really like this idea and I'd love to see it taken further! Have you used DragonFruit on a project? Or are you using another command line argument parser?


                  Sponsor: Ossum unifies agile planning, version control, and continuous integration into a smart platform that saves 3x the time and effort so your team can focus on building their next great product. Sign up free.



                  © 2019 Scott Hanselman. All rights reserved.
                       

                  Azure Marketplace new offers – Volume 41


                  We continue to expand the Azure Marketplace ecosystem. For this volume, 109 new offers successfully met the onboarding criteria and went live. See details of the new offers below:

                  Applications

                  Active Directory Domain Controller 2019

                  Active Directory Domain Controller 2019: This virtual machine comes pre-loaded with the Active Directory Domain Services role, DNS server role, remote administration tools for Active Directory, DNS, and the required PowerShell modules.

                  ADQTM-aThingz Data Quality Tracking and Management

                  ADQTM-aThingz Data Quality Tracking and Management: Improve operational efficiency, optimize cost, gain productivity, and eliminate recurring problems in your data with ADQTM’s seamless pre-built dashboards, KPIs, data models, machine learning, and cognitive features.

                  Anqlave Data Vault

                  Anqlave Data Vault: Anqlave Data Vault is a secure and scalable key management system that leverages the Intel Software Guard Extensions technology on Azure and allows users to run and manage its Hardware Security Module on Azure.

                  ANS GLASS Cloud Management Portal

                  ANS GLASS Cloud Management Portal: Managed Azure with ANS GLASS delivers business-critical cloud support to ensure you realize the full business value of Azure while keeping your business operationally agile and efficient in the cloud.

                  ArcBlock ABT Blockchain Node Taiwan

                  ArcBlock ABT Blockchain Node Taiwan: ArcBlock ABT Blockchain Node is fully decentralized and uses ArcBlock's blockchain development platform to easily build, run, and use DApps and blockchain-ready services.

ArcSight ArcMC 2.90

                  ArcSight ArcMC 2.90: ArcSight Management Center (ArcMC) is a centralized security management system that manages large deployments of ArcSight solutions such as ArcSight Logger, SmartConnectors, FlexConnectors, and Connector Appliance through a single interface.

                  Array vAPV ADC for Azure

                  Array vAPV ADC for Azure: Array Networks vAPV is an easy-to-use, secure, high capacity application delivery controller that integrates with the Azure cloud environment while maintaining feature parity across physical, virtual, and cloud computing environments.

                  Attivo Networks ThreatDefend Deception

                  Attivo Networks ThreatDefend Deception: The Attivo ThreatDefend Deception Platform provides a comprehensive, customer-proven platform for proactive security and threat detection in user networks, data centers, clouds, and a variety of specialized attack surfaces.

                  Azure Blockchain Service Explorer

                  Azure Blockchain Service Explorer: Azure Blockchain Service Explorer provides a rich interface for interpreting data on your ledger, with detailed views of tokens, contracts, accounts, transactions, and blocks.

                  Blue Ocean Note

                  Blue Ocean Note: Blue Ocean Note is a Software-as-a-Service care record management system for welfare facilities and nursery schools. This application is available only in Japanese.

                  BrightSkool

                  BrightSkool: BrightSkool is designed to help schools manage challenges in a single unified solution. It is a reliable, affordable, web-based platform with a proven record of increased productivity and efficiency.

                  Cloud Edition for Lustre Client

                  Cloud Edition for Lustre Client: Cloud Edition for Lustre Client is a scalable, parallel file system purpose-built for high performance computing (HPC) and ideally suited for dynamic, pay-as-you-go applications from rapid simulation and prototyping to peak HPC workloads.

                  Cloud Security Center

                  Cloud Security Center: Ensure your cloud environments are equipped with the latest Microsoft 365 E5 Security features, yielding protection for user and administrator identities including all devices, applications, and data. This application is available only in Dutch.

                  Cloud Velocity

                  Cloud Velocity: Cloud Velocity offers an “easy button” for monitoring Office 365. Devices are placed in critical locations, escalation protocols are defined, and then the outcomes simply happen: automated phone calls, emails, texts, alerts, and more.

                  CloudBees Jenkins Distribution

                  CloudBees Jenkins Distribution: CloudBees Jenkins Distribution provides development teams with a highly dependable, secure Jenkins environment curated from the most recent supported Jenkins release. The distribution comes with a recommended catalog of tested plugins.

                  Corda Enterprise Virtual Machine

                  Corda Enterprise Virtual Machine: With Corda Enterprise Virtual Machine deployed on Microsoft Azure, developers can quickly and easily deploy nodes on a long-lived Corda network using pre-made cloud templates.

                  CrateDB Cloud

                  CrateDB Cloud: CrateDB Cloud is a scalable SQL cloud service hosted on Azure and operated 24/7 by Crate.io. It is ideal for industrial time series data processing and other IoT and machine data analytic workloads.

                  DataFabric for Azure

                  DataFabric for Azure: DataFabric automatically creates a live graph of stateful objects to represent real-world data sources, such as sensors, devices, and systems, and then dynamically interlinks these objects to maintain concurrency through a secure mesh of connections.

                  Datavard Glue

                  Datavard Glue: Datavard Glue seamlessly integrates your SAP landscape with big data applications running on Hadoop. Adapt big data technologies with no extra effort and leverage your SAP experience for big data operations.

                  DMX ETL

                  DMX ETL: DMX enables data transformation to extract, transfer, and load data from multiple sources to on-premises SQL or Azure targets. DMX ETL is an easy-to-configure transformation tool that does not require coding or tuning.

                  DMX-H ETL

                  DMX-H ETL: Syncsort’s DMX-H offers a single software environment for accessing and integrating all your enterprise data sources – both batch and streaming – while managing, governing, and securing the entire process.

                  eCourt

                  eCourt: eCourt provides information and communication technology enablement for courts. It features an approval system at every stage in the court process, case approval notification, decentralized blockchain record storage, and more.

Ericom Connect VDI

                  Ericom Connect VDI: Ericom Connect VDI provides virtual desktops running Windows 10 on Azure. The solution implements multifactor authentication and provides single sign-on, clientless access, and more. This application is available only in Japanese.

                  GAPTEQ Low-Code-Platform

                  GAPTEQ | Low-Code-Platform: GAPTEQ is a professional front end for your SQL database. Use GAPTEQ to connect to a Microsoft SQL Server database or MySQL and then use its metadata, tables, logic, and data.

                  hd backup 365

                  hd backup 365: Azure Backup replaces your existing on-premises or off-site backup solution with a cloud-based solution that is reliable, secure, and cost-competitive. This application is available only in Korean.

                  Human Risks

                  Human Risks: Human Risks is an online platform that enables you to manage the entire enterprise security risk management process from risk assessments to incident response.

                  Hyperproof Trial

                  Hyperproof Trial: Hyperproof is targeted at technical teams, compliance managers, and auditors considering new compliance programs or looking to make existing programs more efficient. Hyperproof makes it easy to manage all your day-to-day compliance tasks.

                  IncrediBuild Demo

                  IncrediBuild Demo: Easily accelerate a Visual Studio sample with IncrediBuild or upload your code to gain exceptional build performance. This instance includes a pre-installed IncrediBuild Coordinator and Agent with Visual Studio Community and a Visual Studio sample project.

                  Infosys Analytics Workbench

                  Infosys Analytics Workbench: Infosys Analytics Workbench provides leading capabilities for data discovery, analytical modeling, model management, visualization, and self-service model consumption to deliver end-to-end self-service capabilities.

                  Intellicus BI Server V18.1 (10 Users - Linux)

                  Intellicus BI Server V18.1 (10 Users - Linux): Intellicus BI Server on Microsoft Azure is an end-to-end self-service BI platform that offers advanced reporting and analytics capabilities, a semantic layer, and integrated ETL capabilities.

Intellicus BI Server V18.1 (5 Users - Linux)

                  Intellicus BI Server V18.1 (5 Users - Linux): Intellicus BI Server on Microsoft Azure is an end-to-end self-service BI platform that offers advanced reporting and analytics capabilities, a semantic layer, and integrated ETL capabilities.

                  Johnson Controls Digital Vault

                  Johnson Controls Digital Vault: Johnson Controls Digital Vault is a flexible, scalable platform that reaches across silos to gather data from disparate sources, store it securely, and standardize it. It then converts that data into something you can leverage to gain new insights.

                  Kamstrup Analytics - District Analyser

                  Kamstrup Analytics – District Analyser: Kamstrup’s analytics platform for water utilities comprises two systems – Water Intelligence and Incidents – to help you effectively go from imagining “What if” to knowing “How to.”

                  KEYRUS - Chatbot CODY - Smart Assistant Integrado

                  KEYRUS - Chatbot CODY - Smart Assistant Integrado: The CODY Smart Assistant solution with Microsoft LUIS offers an intelligent conversation platform that allows quick access to results and KPI placements for sales and operations. This application is available only in Portuguese.

                  Lenses-io

                  Lenses.io: Lenses is an innovative DataOps platform providing SQL access and processing on streaming data. Lenses on Azure is optimized for both Azure HDInsight and your own Kafka clusters to streamline the configuration.

                  M365 Workplace Cloud Storage Easy Intune Storage

                  M365 Workplace Cloud Storage | Easy Intune Storage: Microsoft cloud-managed devices get relevant policies and configurations from Microsoft Intune, with some settings relying on files available by URL. This application manages these files with an easy, web-based approach.

                  Managed Detection and Response for Azure

                  Managed Detection and Response for Azure: Protect your Azure deployment with Paladion’s comprehensive Managed Detection and Response service that leverages next-generation AI to defend your Azure deployment at every stage of a threat’s lifecycle.

                  ManageEngine Mobile Device Manager Plus MSP

                  ManageEngine Mobile Device Manager Plus MSP: Mobile Device Manager Plus MSP is mobile device management software that features device enrollment, app management, profile management, security management, and more.

                  MinIO Client Container Image

                  MinIO Client Container Image: MinIO Client is a Golang command line interface tool that offers alternatives for ls, cp, mkdir, diff, and rsync commands for file systems and object storage systems.

                  MinIO Container Image

                  MinIO Container Image: MinIO is an object storage server that is compatible with cloud storage services and is mainly used for storing unstructured data such as photos, videos, and log files.

                  movingimage Secure Enterprise Video Platform

                  movingimage Secure Enterprise Video Platform: This Azure-based platform offers a smooth, secure video streaming experience for large companies across different verticals – including 26 of 30 DAX-listed companies.

                  OpenCities Digital Workplace Intranet

                  OpenCities DigitalWorkplace Intranet: Empower your city employees with tools that elevate communication and collaboration. OpenCities DigitalWorkplace is a powerful cloud-based intranet that can streamline processes to save your city time and money.

                  OpenCities Web CMS

                  OpenCities Web CMS: OpenCities makes it easier for cities to transform their websites into digital government platforms. OpenCities’ user-tested templates deliver beautiful and functional sites that allow staff to create content and online services that engage citizens.

                  OPSAI-COM

                  OPSAI.COM: The OPSAI platform delivers deep insight into your IT estate, allowing IT and business users to have a common view of systems and processes. Ensure a secure, compliant IT infrastructure and automate your operations.

                  Phish Hunter

                  Phish Hunter: Phish Hunter offers an automated solution to phishing and identity compromise. The solution simplifies the process of detecting and remediating phishing incidents, eliminating the risk of compromised credentials.

                  photographic asset inspection

                  Photographic Asset Inspection: Photographic Asset Inspection is for infrastructure owners responsible for maintaining and operating concrete surface infrastructure such as bridges.

                  PI3

                  PI3: PI3 is an Azure-based card payments analytics and reporting platform for financial institutions. The PI3 platform enables businesses connected to card transactions (credit or debit) to gain insights from their data.

                  PiXYZ Studio

                  PiXYZ Studio: PiXYZ Studio prepares and transforms 3D CAD data into 3D assets that are ready to use in real-time experiences for various business purposes, including design, marketing, and training.

                  Precision Campus Analytics

                  Precision Campus Analytics: Precision Campus provides an online query tool and dashboard system for your college or university. Enable colleagues to explore enrollment, retention rates, course success rates, and other metrics you choose.

                  Publico24 spzoo

                  Publico24 sp. z o.o.: Publico24 is a digital press newsstand that offers news stories in HTML as a service that can help boost customer satisfaction. This application is available only in Polish.

                  PyTorch Container Image

                  PyTorch Container Image: PyTorch is a deep learning platform that accelerates the transition from research prototyping to production deployment. This Bitnami image includes Torchvision for specific computer vision support.

                  Real-World Audiences and Triggers for Dynamics

                  Real-World Audiences and Triggers for Dynamics: Neura enables you to attribute events such as session starts, app opens, push engagement attempts, and in-app features to real-world user behavior, uncovering actionable insights for campaign optimization.

                  Rhapsody Golf

                  Rhapsody Golf: Get comprehensive golf course management with Rhapsody Golf. The Front Office module handles bag drop, locker assignment, flight and caddy, and tournaments and scoring while the Membership module seamlessly handles privileges, statements, and renewals.

                  Rhapsody Hospitality Management System

                  Rhapsody Hospitality Management System: The comprehensive Rhapsody Hospitality Management System handles financial consolidation, centralized purchasing, and group-level business intelligence while helping manage multiple properties.

                  RStudio Connect

                  RStudio Connect: Publish R and Python data products in one IT-managed and monitored location with flexible security policies to bring the power of data science to your enterprise.

                  SD-Internet

                  SD-INTERNET: SD-INTERNET helps small and midsize enterprises accelerate their digital transformation journey by enhancing Azure cloud application experiences through Adaptiv Networks' high-performance Network-as-a-Service platform.

                  Service Sheeft

                  Service Sheeft: Developed on Microsoft technologies to be used as Software-as-a-Service on Azure, Service Sheeft is a ticketing system focused on improving communication and collaboration when solving end user support issues.

                  Smart Cursors Marketplace

                  SmartCursors Marketplace: SmartCursors is a marketplace of integrated cloud applications for managing, driving, and transforming every aspect of business.

                  Smetric Business Intelligence Service

                  Smetric Business Intelligence Service: Smetric’s business intelligence tools automatically extract, analyze, and present data from various sources in beautiful, customized dashboards. The visual formats are easy to read, easy to share, and accessible from anywhere, anytime.

                  Solucion Neurona

                  Solucion Neurona: Designed for financial institutions, Neurona works as a transactional switch specialized in managing electronic money transactions (mass payments, electronic collections, and funds transfers). This application is available only in Spanish.

                  SwaggerHub Cloud

                  SwaggerHub Cloud: Create a single source of truth for OpenAPI definitions with SwaggerHub's API design platform. Collaborate on changes and new development, define and enforce standards across your API catalog, and integrate seamlessly with other API lifecycle solutions.

                  SysTrack Digital Experience Monitoring

                  SysTrack Digital Experience Monitoring: SysTrack is an experience monitoring solution that gathers data on what affects your users and their productivity in the digital workplace – including CPU, RAM, application resource use, and over 10,000 other data points.

                  Teradata Data Mover

                  Teradata Data Mover: Teradata Data Mover is a powerful data movement tool that intelligently chooses the fastest method to copy data and database objects between databases.

                  Teradata Data Mover (IntelliSphere)

                  Teradata Data Mover (IntelliSphere): Teradata Data Mover is a powerful data movement tool that intelligently chooses the fastest method to copy data and database objects between databases.

                  Teradata Ecosystem Manager

                  Teradata Ecosystem Manager: Teradata Ecosystem Manager provides an end-to-end approach to meeting application SLAs through monitoring, administration, and control of data warehouse environments to let you more effectively manage your deployment.

                  Teradata Query Service

                  Teradata Query Service: Teradata Query Service provides application developers a simplified, modern interface to connect to data from a web page or application.

                  Teradata QueryGrid Manager (IntelliSphere)

                  Teradata QueryGrid Manager (IntelliSphere): Teradata QueryGrid Manager (IntelliSphere) provides federated query capability that allows users to access and query data in remote servers that are part of the Teradata QueryGrid data fabric.

                  Teradata Vantage with IntelliSphere

                  Teradata Vantage with IntelliSphere: Teradata Vantage is Teradata's flagship analytics platform that provides a fast path to secure, scalable, high-performance analytics for tackling your most complex business challenges.

                  Teradata Viewpoint (IntelliSphere)

                  Teradata Viewpoint (IntelliSphere): Teradata Viewpoint (IntelliSphere) is an advanced web-based management portal for up to 10 Teradata systems whether in the cloud or on-premises. Entitlement comes from a paid Teradata Vantage with IntelliSphere subscription.

                  UiPath Robot

                  UiPath Robot: This solution template delivers provisioning of UiPath robots including automatic connection to your UiPath Orchestrator for secure scheduling, management, and control of your enterprise-wide digital workforce.

                  Unscrambl Answers

                  Unscrambl Answers: Unscrambl Answers has been trained with domain-specific knowledge about your business, understands your data, and has deeply embedded machine learning algorithms that help you discover and present relevant insights in natural language.

                  Voyado

                  Voyado: The powerful, user-friendly Voyado loyalty system helps strengthen your customer relations and uses data to increase sales, cut costs, and reach maximum profitability.

                  VSBLTY VisionCaptor

                  VSBLTY VisionCaptor: The VisionCaptor content management system provides a wide variety of capabilities for bringing proximity-aware, interactive brand messaging to life on any digital screen or platform.

                  VULCAN

                  VULCAN: VULCAN analyzes your trainees’ performance in real time while they execute an exercise, providing you with the information needed to increase efficiency and flexibility.

                  WebFOCUS BUE 8201m

                  WebFOCUS BUE 8201m: WebFOCUS BUE is for business users and analysts who would like to generate and share reports, charts, dashboards, and in-document analytics to conduct data discovery and explore data for trends, patterns, and opportunities.

                  Consulting services

                  Azure 4 Week Briefing, Assessment, and POC Offer

                  Azure 4 Week Briefing, Assessment, and POC Offer: Pyramid Consulting Solutions offers an innovative Azure consulting service in three phases: on-site kickoff briefing, 30-day assessment, and proof of concept for migration of an initial workload to Azure.

                  Azure Backup & Restore 4-Wk POC

                  Azure Backup & Restore: 4-Wk POC: Using Azure for backup and data protection presents an opportunity to address a number of risk and compliance objectives. Test-drive Azure Backup for four weeks and protect up to five workloads in Azure with this offer from Foundation IT.

                  Azure Cloud Assessment with Rackspace 2-Wk

                  Azure Cloud Assessment with Rackspace: 2-Wk: Rackspace consultants will assess your application estate and infrastructure platform to set a strategy for moving workloads to the cloud. They will also provide a report to determine a roadmap and high-level Azure design.

                  Azure Data Warehouse & Data Lake 2 Hr Assessment

                  Azure Data Warehouse & Data Lake: 2-Hr Assessment: Neudesic will review your Azure logical data warehouse and Azure Data Lake requirements, then explain how a repeatable approach can deliver opportunities in predictive and prescriptive analytics in your environment.

                  Azure DevOps Jumpstart 1-Week Implementation

                  Azure DevOps Jumpstart: 1-Week Implementation: Wintellect's one-week consulting offer jump-starts your dev-ops move to the cloud. Build, test, automate, and deploy applications more efficiently while reducing costs and increasing team productivity.

                  Azure Foundations Service 10-day Implementation

                  Azure Foundations Service: 10-day Implementation: Transparity Solutions will help you lead your Azure journey with governance and security. Learn how to architect key components such as networking integration, identity, network security, compute, and storage.

                  Azure Migration Planning Free 4 Hour Workshop

                  Azure Migration Planning Free 4 Hour Workshop: SystemsUp offers a free four-hour workshop to discuss whether your existing compute environment could be successfully migrated to Azure, resulting in a statement of work or proposal for work to deliver the engagement.

                  Azure Migration 1-day Assessment

                  Azure Migration: 1-day Assessment: Atmosera offers a customer-proven assessment practice that ensures a match of your needs with the optimal cloud solution, delivering a clear roadmap with options to make informed decisions on Azure migration.

                  Azure Migration 3-week Assessment

                  Azure Migration: 3-week Assessment: TCS offers a detailed, three-week cloud suitability assessment of up to 20 business applications along with associated infrastructure. An outcome report covers deployment model, cost benefit analysis, and migration plan.

                  Cloud Backup 2-Day Assessment

                  Cloud Backup: 2-Day Assessment: Find out the ROI of moving to cloud backup and disaster recovery with this assessment by Insight, which compiles and clarifies the data you need to make well-informed decisions that will affect your organization’s operational resiliency.

                  CloudForte Consulting for Azure

                  CloudForte Consulting for Azure: Unisys CloudForte for Azure is a comprehensive and customizable managed services offering that addresses the most critical and trickiest cloud adoption challenges, especially around compliance and security.

                  CSP Migration 2 Week Free Rapid Migration

                  CSP Migration: 2 Week Free Rapid Migration: Hanu Software offers a no-cost assessment and migration for existing Azure customers to Hanu's Cloud Service Provider (CSP). Hanu will provide a migration roadmap and recommendations on cost and performance optimizations.

                  CTA for Azure Migration 2-Wk Assessment

                  CTA for Azure Migration: 2-Wk Assessment: This two-week datacenter-to-Azure migration assessment by Silicus is focused on helping enterprises with cloud adoption strategy, roadmap planning, current-state assessment, and solution gap analysis.

                  Datacom Enabling Services 3-Wk Implementation

                  Datacom Enabling Services: 3-Wk Implementation: Standardize, automate, and securely deliver business-grade cloud applications with help from Datacom, an Azure Expert MSP. Customers benefit from the scale, security, and expert skillsets available via Datacom Enabling Services.

                  DevOps Practices and Platform 1 Day Assessment

                  DevOps Practices and Platform: 1 Day Assessment: This professional dev-ops service from CloudOps supports current Azure users (or customers looking to get started with Azure) who care about speed to market and modernizing their applications and infrastructure practices.

                  Digital Media Assessment 2-Hr Assessment

                  Digital Media Assessment: 2-Hr Assessment: Globant will assess your digital media strategy and content delivery requirements and will provide a recommendation on how Azure can deliver your media to multiple endpoints for accelerated business impact.

Disaster Recovery trial in Azure 4-Wk POC

                  Disaster Recovery trial in Azure: 4-Wk POC: This trial lets you test how Azure can protect workloads. Foundation IT will set up and test Azure Site Recovery on your behalf and simulate a DR test so that you have a complete view of how a managed cloud DR service works.

                  Domino & Notes App Modernization 1-Day Workshop

                  Domino & Notes App Modernization: 1-Day Workshop: This workshop from Binary Tree will detail the technical, business, and end user considerations and options available to pursue a Notes/Domino retirement or retention program.

                  DRaaS on Azure - 1 Week Assessment

                  DRaaS on Azure - 1 Week Assessment: Cloud4C, a Microsoft CSP Gold partner, can help assess and execute your disaster recovery plan. Azure-certified architects will create a roadmap to understand, define, and plan an optimal DR strategy for your organization.

                  Encrypted Briefcase 1-Hour Briefing

                  Encrypted Briefcase: 1-Hour Briefing: Communication Square offers a briefing on best practices to implement its Encrypted Briefcase solution, which provides data protection, access tracking, and permissions revocation for Word, Excel, PowerPoint, and PDF files.

                  Encrypted Briefcase 1-Wk Assessment

                  Encrypted Briefcase: 1-Wk Assessment: This assessment from Communication Square analyzes your file storage and document collaboration platforms and then explains how to deploy Encrypted Briefcase to track file access, revoke permissions, and restore older data.

                  Encrypted Briefcase 2 Weeks PoC

                  Encrypted Briefcase: 2 Weeks PoC: Communication Square offers this proof of concept for Encrypted Briefcase, which provides data protection for Word, Excel, PowerPoint, and PDF files. Experts will set up, provision, and provide admin and onboarding guides for your solution.

                  Encrypted Briefcase 4 Weeks Implementation

                  Encrypted Briefcase: 4 Weeks Implementation: This four-week training by Communication Square leads to a deeper understanding and implementation of data protection provided by Encrypted Briefcase and delivers complimentary email support for one year.

                  Hybrid Identity and Access Management 10-Wk Imp

Hybrid Identity and Access Management: 10-Wk Imp: Conterra will design a solution based on Microsoft Identity Manager (MIM) 2016 and Azure Active Directory and deploy it in your production environment either on-premises or via Azure IaaS.

                  Mobile App Innovation 1hr Briefing

                  Mobile App Innovation: 1hr Briefing: This one-hour mobile app innovation briefing from Dootrix shares best practices and helps you learn how to build a next-generation mobile app on the Azure cloud platform.

                  Running SCOM in Azure 5 Day Assessment

                  Running SCOM in Azure: 5 Day Assessment: The SCOM to Azure service assesses your System Center Operations Manager infrastructure and provides a framework to move it to Azure effectively and cost-efficiently.

SAP on Azure - 1 Week Assessment

                  SAP on Azure - 1 week Assessment: Cloud4C's SAP-certified consultants participate in a detailed assessment and workshop to define the best path for SAP migration and onboarding on Microsoft Azure.

                  Secure Communication System - 2 Week PoC

                  Secure Communication System – 2 Week PoC: This Azure-based offering is designed to reimagine how your business approaches secure communication and compliance to industry-defined standards.

                  Secure Communication System 1-Week Assessment

Secure Communication System: 1-Week Assessment: This assessment helps IT directors verify that their communication and collaboration are compliant, validate that data access controls are in place and functioning properly, and confirm that company information is secure.

                  Secure Communication System 1 Hour Briefing

                  Secure Communication System - 1 Hour Briefing: Communication Square's Azure-based offering covers making, receiving, and transferring business calls in the office, at home, or on the road using your phone or PC without the need for a traditional phone system.

                  Secure Emailing System - 1-Hour Briefing

                  Secure Emailing System – 1-Hour Briefing: This briefing will address the following topics: How to protect your data no matter where it is, how to automatically classify sensitive information, and how to track and revoke access to emails and attachments.

                  Secure Emailing System - 2-Week Proof of Concept

                  Secure Emailing System – 2-Week Proof of Concept: This Azure-based offering is designed to reimagine how your business approaches securing email systems, compliance to industry-defined standards, and secure access to data.

                  Secure Emailing System 1 Week Assessment

                  Secure Emailing System: 1 Week Assessment: This assessment illustrates encryption and decryption methods, automatic data classification, protection against non-compliance, and how to combine the right set of tools, knowledge, and expertise to benefit your email security.

                  Secure Emailing System 4 Wk Implementation

                  Secure Emailing System: 4 Wk Implementation: Communication Square's Azure-based offering helps classify your data based on sensitivity, protect your data, and leverage deployment and management flexibility.

                  Azure Monitor for containers with Prometheus now in preview


Prometheus is a popular open source metric monitoring solution and is a part of the Cloud Native Computing Foundation. Many of our customers like the extensive metrics that Prometheus provides on Kubernetes. They also like how easy it is to use Azure Monitor for containers, which provides fully managed, out-of-the-box monitoring for Azure Kubernetes Service (AKS) clusters. We have been receiving requests to funnel Prometheus data into Azure Monitor, and today we are excited to share that Prometheus integration with Azure Monitor for containers is now in preview, bringing together the best of both worlds.

Flowchart illustrating how Prometheus endpoints integrate with Azure Monitor for containers

Typically, to use Prometheus you need to set up and manage a Prometheus server with a database. With the Azure Monitor integration, no Prometheus server is needed. You just need to expose the Prometheus endpoint through your exporters or pods (application), and the containerized agent for Azure Monitor for containers can scrape the metrics for you. We have provided a seamless onboarding experience to collect Prometheus metrics with Azure Monitor. The example below shows how the coredns metrics, exposed through the kube-dns metrics endpoint, are collected into Azure Monitor logs.

                  Screenshot example of how the coredns metrics is collected into Azure Monitor for logs

You can also collect workload metrics from your containers by instrumenting the Prometheus SDK into your application. The example below shows the collection of the prommetrics_demo_requests_counter. You can collect workload metrics through URLs, endpoints, or pod annotations as well.

                  Screenshot example showing the collection of the prommetrics_demo_requests_counter

                  Full stack monitoring with Azure Monitor for containers

So how do Prometheus metrics fit in with the rest of the metrics that Azure Monitor for containers already provides, including the recently added storage and network performance metrics? You can see how the metrics all fit together below. Azure Monitor for containers provides out-of-the-box telemetry at the platform, container, and orchestrator levels, and to an extent at the workload level. With the additional workload metrics from Prometheus, you now get a full-stack, end-to-end monitoring view for your Azure Kubernetes Service (AKS) clusters in Azure Monitor for containers.

                  Image of table showing how Prometheus telemetry fits in with other metrics

                  Visualizing Prometheus metrics on Azure dashboard and alerting

Once the metrics are stored in Azure Monitor logs, you can query them using Log Analytics with the Kusto Query Language (KQL). Here’s a sample query against the counter emitted by an application instrumented with the Prometheus SDK. You can quickly plot the result using queries in the Azure portal.

InsightsMetrics
| where Name == "prommetrics_demo_requests_counter_total"
| extend dimensions = parse_json(Tags)
| extend request_status = tostring(dimensions.request_status)
| where request_status == "bad"
| where TimeGenerated > todatetime('2019-07-02T09:40:00.000')
| where TimeGenerated < todatetime('2019-07-02T09:54:00.000')
| project request_status, Val, TimeGenerated
| render timechart

                  Screenshot example of plotting results in Azure portal for a query that uses the embedded Prometheus library

                  You can pin the chart to your Azure dashboard and create your own customized dashboard. You can also pin your current pod and node charts to the dashboard from the Azure Monitor for container cluster view.

                  Screenshot example of pinning query result charts to the Azure or customized dashboards

If you would like to alert on the Prometheus metrics, you can do so using alerts in Azure Monitor.

This has been an exciting integration for us, and we are looking to continue our efforts to help our customers monitor Kubernetes. For more information on configuring the agent to collect Prometheus data, querying, and using the data in Azure Monitor for containers, visit our documentation. Prometheus provides rich and extensive telemetry; if you need to understand the cost implications, here’s a query that will show you the data ingested from Prometheus into Azure Monitor logs.

For available metrics on Prometheus, please go to the Prometheus website.

For any feedback or suggestions, please reach out to us through the tech forum or Stack Overflow.


                  Conversational AI updates for July 2019


                  At Build, we highlighted a few customers who are building conversational experiences using the Bot Framework to transform their customer experiences. For example, BMW discussed its work on the BMW Intelligent Personal Assistant to deliver conversational experiences across multiple canvases by leveraging the Bot Framework and Cognitive Services. LaLiga built their own virtual assistant which allows fans to experience and interact with LaLiga across multiple platforms.

With the Bot Framework release in July, we are happy to share the release of Bot Framework SDK 4.5, a preview of 4.6, updates to our developer tools, and new channels in Azure Bot Service. We’ll also use the opportunity to provide additional updates on the Conversational AI releases from Microsoft.

                  Bot Framework channels

We continue to expand channel support and functionality for the Bot Framework and Azure Bot Service.

                  Voice-first bot applications: Direct Line Speech preview

The Microsoft Bot Framework lets you connect with your users wherever they are. We offer thirteen supported channels, including popular messaging apps like Skype, Microsoft Teams, Slack, Facebook Messenger, Telegram, and Kik, as well as a growing number of community adapters.

Today, we are happy to share the preview of the Direct Line Speech channel. This is a new channel designed for voice-first experiences for your Bot Framework bot, utilizing Microsoft’s Speech Services technologies. The Direct Line Speech channel is a native implementation of speech for mobile applications and IoT devices, with support for text-to-speech, speech-to-text, and custom wake words. We’re happy to share that we’re now opening the preview to all Bot Framework customers.

Getting started with voice support for your bot is easy. Simply update to the latest Bot Framework SDK, configure the Direct Line Speech channel for your bot, and use the Speech SDK to embed voice into your mobile application or device today. A minimal client sketch follows the diagram below.

                  Flowchart diagram showing how to utilize Direct Line speech co-located services
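To give a flavor of the client side, here is a hedged sketch using the Speech SDK’s dialog API. It reflects the current BotFrameworkConfig/DialogServiceConnector shape, which may differ from preview-era naming, and the key and region values are placeholders for your own Speech resource:

using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech.Dialog;

class VoiceClient
{
    static async Task Main()
    {
        // Placeholder Speech resource key and region; the bot itself is
        // resolved through its Direct Line Speech channel registration.
        var config = BotFrameworkConfig.FromSubscription("<speech-key>", "<region>");

        using (var connector = new DialogServiceConnector(config))
        {
            // Activities from the bot arrive here, optionally with audio.
            connector.ActivityReceived += (sender, e) =>
                Console.WriteLine($"Activity received (audio included: {e.HasAudio})");

            await connector.ConnectAsync();

            // Capture a single utterance from the default microphone and
            // forward the recognized text to the bot.
            await connector.ListenOnceAsync();
        }
    }
}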

                  Better isolation for your bot: Direct Line App Service Extension

Direct Line and Web Chat are used broadly by Bot Framework customers to provide chat experiences on their web pages, mobile apps, and devices. For some scenarios, customers have given us the feedback that they’d like a version of Direct Line that can be deployed in isolation, such as in a Virtual Network (VNET). A VNET lets you create your own private space in Azure and is crucial to your cloud network as it offers isolation, segmentation, and other key benefits. The Direct Line App Service Extension can be deployed as part of a VNET, allowing IT administrators to have more control over conversation traffic and to improve conversation latency by reducing the number of network hops. Feel free to get started with the Direct Line App Service Extension.

                  Bot Framework SDK

As part of the Bot Framework SDK 4.6 preview, we updated Adaptive Dialogs, which allow developers to dynamically update conversation flow based on context and events. This is especially handy when dealing with context switches and interruptions in the middle of a conversation. Learn more by reading the documentation and reviewing the samples.

Continuing our commitment to the open source community, and following through on our promise to let developers use their favorite programming language, we updated the Bot Framework Python SDK. The Python SDK now supports OAuth, prompts, and Cosmos DB, and includes all major functionality of SDK 4.5. New samples are available as well; a minimal bot in Python looks like the sketch below.
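To give a flavor of the Python SDK, here is a minimal echo-style bot built on botbuilder-core's ActivityHandler; the class name and reply text are our own illustration:

```python
from botbuilder.core import ActivityHandler, MessageFactory, TurnContext


class EchoBot(ActivityHandler):
    """Minimal bot: replies to every message with an echo."""

    async def on_message_activity(self, turn_context: TurnContext):
        # ActivityHandler routes incoming message activities here.
        await turn_context.send_activity(
            MessageFactory.text(f"You said: {turn_context.activity.text}")
        )
```

Hook the bot up to an adapter (for example, BotFrameworkAdapter in an aiohttp app, as shown in the SDK samples) to serve it.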

Addressing customers' and developers' requests for better testing tools, the July version of the SDK introduces a new unit testing capability. The Microsoft.Bot.Builder.Testing package simplifies the process of unit testing dialogs in your bot. Check out the documentation and samples.

Introduced at Microsoft Build 2019, Bot Inspector is a new feature in the Bot Framework Emulator that lets you debug and test bots on channels like Microsoft Teams, Slack, Cortana, and more. As you use the bot on a specific channel, messages are mirrored to the Bot Framework Emulator, where you can inspect the message data that the bot received. A snapshot of the bot's memory state for any given turn between the channel and the bot is rendered as well.

Following asks from enterprise customers, we put together a Web Chat sample demonstrating single sign-on to enterprise apps using OAuth. In this sample, we show how to authorize a user to access resources on an enterprise app with a bot. Two types of resources, Microsoft Graph and the GitHub API, are used to demonstrate the interoperability of OAuth.

                  Solutions

                  Virtual agent solution accelerator

We updated the Virtual Assistant and associated skills to enable out-of-box support for Direct Line Speech, opening up voice assistant experiences with no additional steps. This includes middleware to enable control of the voice being used. Once a new Virtual Assistant has been deployed, you can follow the instructions for configuring the Virtual Assistant with the Direct Line Speech channel. An example test harness application is also provided so you can quickly and easily test speech scenarios.

An Android app client for the Virtual Assistant is also available. It integrates with Direct Line Speech and the Virtual Assistant, demonstrating how a device client can interact with your Virtual Assistant and render Adaptive Cards.

In addition, we have added out-of-box support for Microsoft Teams, ensuring that your Virtual Assistant and skills work there, including authentication and Adaptive Cards. You can follow the steps for creating the associated application manifest.

More broadly, the Virtual Assistant Solution Accelerator provides a set of templates, solution accelerators, and skills to help you build sophisticated conversational experiences, now including the Direct Line Speech and Microsoft Teams support described above.

The Dynamics 365 Virtual Agent for Customer Service preview delivers exceptional customer service through intelligent, adaptable virtual agents. Customer service experts can easily create and enhance bots with AI-driven insights. The Dynamics 365 Virtual Agent is built on top of the Bot Framework and Azure.

                  Making it easier to bring your Linux based web apps to Azure App Service


Application development has changed radically over the years: from hosting all the physical hardware for an app and its dependencies on-premises; to a model where the hardware is hosted by external companies yet still managed by the users; to hosting apps on a fully managed platform where all hardware and software management is handled by the hosting provider; and finally to fully serverless solutions where no resources need to be set up to run applications.

                  Table of different Web Application hosting options (On-Prem, IaaS, PaaS, and SaaS) and the balance of responsibility split between the customer and Microsoft.

The perception of complexity in running smaller solutions in the cloud is slowly being eradicated as solutions move to managed platforms, where even non-technical audiences can manage their applications in the cloud.

A great example in the managed platform realm is Azure App Service. Azure App Service provides an easy way to bring source code or containers and deploy full web apps in minutes, with configuration settings in the hands of the app owner. Built-in features such as secure sockets layer (SSL) certificates, custom domains, auto-scaling, continuous integration and deployment (CI/CD) pipelines, diagnostics, troubleshooting, and much more make it a powerful platform for full-cycle build and management of applications. Azure App Service also abstracts the infrastructure and its management overhead away from users, maintaining the physical hardware running the service, patching security vulnerabilities, and continuously updating the underlying operating system.

Even in the managed platform world, where customers shouldn't have to care about the underlying platform they are physically running on, the reality is that some applications, depending on their framework, perform better on a specific operating system. This is why the team is putting a lot of work into the Linux hosting offering and making it easier to try out. This includes our recent announcement of a free tier for Linux web apps, making it quick and simple to try out the platform with no commitments.

We're excited to introduce a promotional price on the Basic App Service plan for Linux which, depending on regional meters in your datacenter of choice, leads to a 66 percent price drop!

You can use the free tier to test the platform, then move up to the Basic tier and enjoy more of the platform's capabilities. You can host many frameworks on this tier, including WordPress, Node.js, Python, Java, and PHP sites, plus one of the most popular options we've seen on the Linux offering: custom Docker containers. Running a container hosted in Azure App Service provides an easy on-ramp for customers who want a fully managed platform but also want a single deployable artifact containing an app and all of its dependencies, or who want to work with a custom framework or version beyond the defaults built into the Azure App Service platform.

You can even use the Linux offering with networking solutions to secure your app: the preview of Azure Virtual Network (VNet) integration lets you connect to an on-premises database or call into an Azure virtual network of your choice. You can also use access restrictions to control where your app may receive traffic from and to place additional safeguards at the platform level.

                  What now? If you have a web workload you’re thinking of taking to the next level, try out Azure App Service now! Explore all of the possibilities waiting for you as you host your code or container on a managed platform that currently hosts more than two million sites!

                  Create your free Azure trial today.

                  Post on the Microsoft Developer Network forum for questions about Azure App Service.

                  If you have a feature suggestion for the product, please enter it in the feedback forum.

                  Silo busting 2.0—Multi-protocol access for Azure Data Lake Storage


                  Cloud data lakes solve a foundational problem for big data analytics—providing secure, scalable storage for data that traditionally lives in separate data silos. Data lakes were designed from the start to break down data barriers and jump start big data analytics efforts. However, a final “silo busting” frontier remained, enabling multiple data access methods for all data—structured, semi-structured, and unstructured—that lives in the data lake.

Providing multiple data access points to shared data sets allows tools and data applications to interact with the data in the way most natural to them. Additionally, your data lake benefits from the tools and frameworks built for a wide variety of ecosystems. For example, you may ingest your data via an object storage API, process the data using the Hadoop Distributed File System (HDFS) API, and then load the transformed data into a data warehouse via an object storage API. The sketch below illustrates this pattern.
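As a concrete, hypothetical illustration using the Python SDKs for Azure Storage (azure-storage-blob and azure-storage-file-datalake, with Azure AD auth), the same file can be written through the Blob endpoint and read back through the Data Lake Storage endpoint; the account, container, and path names here are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient
from azure.storage.filedatalake import DataLakeServiceClient

credential = DefaultAzureCredential()
account = "myaccount"  # hypothetical hierarchical namespace-enabled account

# Step 1: ingest through the object storage (Blob) endpoint.
blob_service = BlobServiceClient(
    f"https://{account}.blob.core.windows.net", credential=credential
)
blob_service.get_blob_client("lake", "raw/events.json").upload_blob(
    b'{"id": 1}', overwrite=True
)

# Step 2: read the same bytes back through the Data Lake (DFS) endpoint,
# which sees the blob as a file within a directory hierarchy.
dfs_service = DataLakeServiceClient(
    f"https://{account}.dfs.core.windows.net", credential=credential
)
file_client = dfs_service.get_file_system_client("lake").get_file_client(
    "raw/events.json"
)
print(file_client.download_file().readall())
```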

                  Single storage solution for every scenario

We are very excited to announce the preview of multi-protocol access for Azure Data Lake Storage! Azure Data Lake Storage is a unique cloud storage solution for analytics that offers multi-protocol access to the same data. Multi-protocol access, via the Azure Blob storage API and the Azure Data Lake Storage API, lets you leverage existing object storage capabilities on Data Lake Storage accounts, which are hierarchical namespace-enabled storage accounts built on top of Blob storage. This gives you the flexibility to put all your different types of data in your cloud data lake, knowing that you can make the best use of it as your use case evolves.


                  Single storage solution

                  Expanded feature set, ecosystem, and applications

                  Existing blob features such as access tiers and lifecycle management policies are now unlocked for your Data Lake Storage accounts. This is paradigm-shifting because your blob data can now be used for analytics. Additionally, services such as Azure Stream Analytics, IoT Hub, Azure Event Hubs capture, Azure Data Box, Azure Search, and many others integrate seamlessly with Data Lake Storage. Important scenarios like on-premises migration to the cloud can now easily move PB-sized datasets to Data Lake Storage using Data Box.

Multi-protocol access for Data Lake Storage also enables our partner ecosystem to use existing Blob storage connectors with Data Lake Storage. Here is what our ecosystem partners are saying:

                  “Multi-protocol access for Azure Data Lake Storage is a game changer for our customers. Informatica is committed to Azure Data Lake Storage native support, and Multi-protocol access will help customers accelerate their analytics and data lake modernization initiatives with a minimum of disruption.”

                  - Ronen Schwartz, Senior Vice President and General Manager of Data Integration, Big Data, and Cloud, Informatica

You will not need to update existing applications to gain access to your data stored in Data Lake Storage. Furthermore, you can leverage the power of both your analytics and object storage applications to use your data most effectively.

Graph displaying multi-protocol access that enables storage features, Azure ecosystem, partner ecosystem, and custom applications.

                  Multi-protocol access enables features and ecosystem

                  Multiple API endpoints—Same data, shared features

This capability is unprecedented for cloud analytics services because it supports not only multiple protocols but also multiple storage paradigms. We now bring this powerful capability to your storage in the cloud. Existing tools and applications that use the Blob storage API gain these benefits without any modification. Directory- and file-level access control lists (ACLs) are enforced consistently, regardless of whether the Azure Data Lake Storage API or the Blob storage API is used to access the data.
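For example, an ACL set through the Data Lake Storage API also governs later Blob API access to the same file. A minimal sketch, reusing the assumed account and path from the earlier example, with the AAD object ID as a placeholder:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    "https://myaccount.dfs.core.windows.net", credential=DefaultAzureCredential()
)
file_client = service.get_file_system_client("lake").get_file_client(
    "raw/events.json"
)

# Grant one principal read access; Azure Storage enforces this ACL no matter
# which API (Blob or Data Lake Storage) is used to read the file afterwards.
file_client.set_access_control(
    acl="user::rw-,group::r--,other::---,user:<aad-object-id>:r--"
)
```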

                  Both the Blob storage API and Azure Data Lake Storage API go through the Hierarchical Namespace, which is built on top of Blob storage.

                  Multi-protocol access on Azure Data Lake Storage

                  Features and expanded ecosystem now available on Data Lake Storage

                  Multi-protocol access for Data Lake Storage brings together the best features of Data Lake Storage and Blob storage into one holistic package. It enables many Blob storage features and ecosystem support for your data lake storage.

- Access tiers – Cool and Archive tiers are now available for Data Lake Storage. To learn more, see the documentation "Azure Blob storage: hot, cool, and archive access tiers."
- Lifecycle management policies – You can now set policies to tier or delete data in Data Lake Storage. To learn more, see the documentation "Manage the Azure Blob storage lifecycle."
- Diagnostics logs – Logs for the Blob storage API and Azure Data Lake Storage API are now available in v1.0 and v2.0 formats. To learn more, see the documentation "Azure Storage analytics logging."
- SDKs – Existing Blob storage SDKs can now be used with Data Lake Storage. To learn more, see the SDK documentation.
- PowerShell – PowerShell for data plane operations is now available for Data Lake Storage. To learn more, see the Azure PowerShell quickstart.
- CLI – Azure CLI for data plane operations is now available for Data Lake Storage. To learn more, see the Azure CLI quickstart.
- Notifications via Azure Event Grid – You can now get Blob notifications through Event Grid. To learn more, see the documentation "Reacting to Blob storage events." Azure Data Lake Storage Gen2 notifications are currently available.

- Azure Stream Analytics – Azure Stream Analytics now writes to, as well as reads from, Data Lake Storage.
- Azure Event Hubs capture – The capture feature within Azure Event Hubs now lets you pick Data Lake Storage as one of its destinations.
- IoT Hub – IoT Hub message routing now allows routing to Azure Data Lake Storage Gen2.
- Azure Search – You can now index and apply machine learning models to your Data Lake Storage content using Azure Search.
- Azure Data Box – You can now ingest huge amounts of data from on-premises to Data Lake Storage using Data Box.

                  Please stay tuned as we enable more Blob storage features using this amazing capability.

                  Next steps

All these new capabilities are available today in the West US 2 and West Central US regions. Sign up for the preview, and for more information see our documentation on multi-protocol access for Azure Data Lake Storage.

                  New ways to train custom language models – effortlessly!


Video Indexer (VI), the AI service for Azure Media Services, enables the customization of language models by allowing customers to upload examples of sentences or words belonging to the vocabulary of their specific use case. Since speech recognition can sometimes be tricky, VI enables you to train and adapt the models for your specific domain. Harnessing this capability allows organizations to improve the accuracy of the Video Indexer-generated transcriptions in their accounts.

Over the past few months, we have worked on a series of enhancements to make this customization process even more effective and easy to accomplish. Enhancements include automatically capturing any transcript edits done manually or via the API, as well as allowing customers to add closed caption files to further train their custom language models.

The idea behind these additions is to create a feedback loop in which organizations begin with a base, out-of-the-box language model and gradually improve its accuracy through manual edits and other resources over time, resulting in a model that is fine-tuned to their needs with minimal effort.

An account's custom language models, including all the enhancements this blog describes, are private and are not shared between accounts.

In the following sections, I will drill down into the different ways this can be done.

                  Improving your custom language model using transcript updates

Once a video is indexed in VI, customers can use the Video Indexer portal to introduce manual edits and fixes to the automatic transcription of the video. This can be done by clicking the Edit button at the top-right corner of the video's Timeline pane to move to edit mode, and then simply updating the text, as seen in the image below.

                  An image showing the ability to edit text in the Timeline pane.

The changes are reflected in the transcript, captured in a text file named "From transcript edits," and automatically inserted into the language model that was used to index the video. If you were not already using a custom language model, the updates will be added to a new "Account Adaptations" language model created in the account.

You can manage the language models in your account and see the "From transcript edits" files by going to the Language tab on the content model customization page of the VI website.

Once one of the "From transcript edits" files is opened, you can review the old and new sentences created by the manual updates, and the differences between them, as shown below.

                  Using the 'From transcript edits' file to review sentence changes.

All that is left to do is click Train to update the language model with the latest changes. From that point on, these changes will be reflected in all future videos indexed with that model. Of course, you do not have to use the portal to train the model; the same can be done via the Video Indexer train language model API. Using the API opens up new possibilities, such as automating a recurring training process to leverage ongoing updates.

                  The Content model customization screen.

There is also an update video transcript API that allows customers to update the entire transcript of a video in their account by uploading a VTT file that includes the updates. As part of the new enhancements, when a customer uses this API, Video Indexer also automatically adds the uploaded transcript to the relevant custom model in order to leverage the content as training material. For example, calling update video transcript for a video titled "Godfather" will result in a new transcript file named "Godfather" in the custom language model that was used to index that video. A sketch of invoking the train API is shown below.
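For teams automating this flow, here is a hedged Python sketch of calling the train language model API over HTTPS. The endpoint shape reflects our reading of the Video Indexer API reference and should be verified against the current docs; the location, IDs, and token are all placeholders:

```python
import requests

LOCATION = "trial"  # or your Azure region
ACCOUNT_ID = "<account-id>"
MODEL_ID = "<language-model-id>"
ACCESS_TOKEN = "<account-access-token>"  # obtained from the Video Indexer auth API

# Assumed endpoint shape; confirm against the Video Indexer API reference.
url = (
    f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}"
    f"/Customization/Language/{MODEL_ID}/Train"
)
response = requests.put(url, params={"accessToken": ACCESS_TOKEN})
response.raise_for_status()
print("Training requested:", response.status_code)
```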

                  Improving your custom language model using closed caption files

Another quick and effective way to train your custom language model is to leverage existing closed caption files as training material. This can be done manually, by uploading a new closed caption file to an existing model in the portal as shown in the image below, or by using the create language model and update language model APIs to upload VTT, SRT, or TTML files (similar to what was done until now with TXT files).

The Content model customization screen, with files added.

Once uploaded, VI cleans up all the metadata in the file and strips it down to the text itself. You can see the before and after results in the following examples.

VTT
Before:
NOTE Confidence: 0.891635
00:00:02.620 --> 00:00:05.080
but you don't like meetings before 10 AM.
After:
but you don't like meetings before 10 AM.

SRT
Before:
2
00:00:02,620 --> 00:00:05,080
but you don't like meetings before 10 AM.
After:
but you don't like meetings before 10 AM.

TTML
Before:
<!-- Confidence: 0.891635 -->
<p begin="00:00:02.620" end="00:00:05.080">but you don't like meetings before 10 AM.</p>
After:
but you don't like meetings before 10 AM.
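To build intuition for this cleanup step, here is a rough, illustrative Python sketch of stripping caption metadata down to plain text; VI's actual implementation is internal and far more robust than this:

```python
import re


def strip_captions(raw: str) -> str:
    """Illustrative only: drop timing lines, cue numbers, NOTE blocks, and
    simple tags from a VTT/SRT payload, keeping just the spoken text."""
    kept = []
    for line in raw.splitlines():
        line = line.strip()
        if not line or line == "WEBVTT":
            continue
        if line.startswith("NOTE") or line.isdigit():
            continue  # VTT comments and SRT cue numbers
        if "-->" in line:
            continue  # cue timing lines (also catches the TTML comment above)
        kept.append(re.sub(r"<[^>]+>", "", line))  # strip inline/TTML tags
    return " ".join(kept)


print(strip_captions("""NOTE Confidence: 0.891635
00:00:02.620 --> 00:00:05.080
but you don't like meetings before 10 AM."""))
# -> but you don't like meetings before 10 AM.
```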

From that point on, all that is left to do is review the additions to the model and click Train, or use the train language model API to update the model.

                  Next Steps

The new additions to the custom language model training flow make it easy for you and your organization to get more accurate transcription results. Now it is up to you to add data to your custom language models, using any of the ways we have just discussed, so you get more accurate results for your specific content the next time you index your videos.

                  Have questions or feedback? We would love to hear from you! Use our UserVoice page to help us prioritize features, or email VISupport@Microsoft.com for any questions.

                  Direct Line with speech now available in preview


                  With over 360,000 registered Azure Bot Service developers, we’ve seen significant growth in bots and virtual assistants built on Azure. A major trend we’re following is the growing need for these assistants to support voice-first conversational experiences. As a result, we’re taking steps to make it even easier for developers to build virtual assistants with our virtual assistant solution accelerator and to add speech to their conversational applications with Azure Bot Service.

                  At this year’s Microsoft Build conference, we announced signup availability of the Direct Line Speech channel, which simplifies the creation of end-to-end solutions for voice-first conversational experiences. Today, we’re happy to share that the Direct Line Speech channel is now in preview for any developer with no additional signup or approval required. With this release, the Direct Line Speech channel has also significantly expanded its region support to enable faster and more reliable conversational experiences worldwide.

                  About Direct Line Speech

                  Flowchart diagram showing how to utilize Direct Line speech co-located services

                  Direct Line Speech is a new channel that simplifies the creation of end-to-end solutions for voice-in and voice-out natural user interfaces with a few key components:

                  • An on-device API, available as part of the Speech SDK, simplifies speech and real-time supplementary signal communication to and from a compatible bot.
                  • A cloud service that coordinates wake word verification, speech-to-text, and text-to-speech capabilities for use with your conversational experience.
                  • A new Microsoft Bot Framework channel optimized for low-latency, high-reliability communication in voice-first scenarios.

                  The Direct Line Speech channel is designed to allow deep customization of your virtual assistant or conversational experience to fit your requirements and your brand, including the use of custom wake words to give your assistant its own unique name.

                  Getting started

The Direct Line Speech channel is available for preview use today. Use an existing bot or create a new one, visit the Azure portal to connect it to the Direct Line Speech channel, and then get started with a device application on any of the variety of platforms and programming languages available. We're excited to see what you create and eager to hear your feedback.
