
Updates to geospatial functions in Azure Stream Analytics – Cloud and IoT edge


Azure Stream Analytics is a fully managed PaaS service that helps you run real-time analytics and complex event processing logic on telemetry from devices and applications. Numerous built-in functions in Stream Analytics help users build real-time applications with simple SQL. With these capabilities, customers can quickly build powerful applications for scenarios such as fleet monitoring, connected cars, mobile asset tracking, geofence monitoring, and ridesharing.

Today, we are excited to announce several enhancements to geospatial features. These features will help customers manage a much larger set of mobile assets and vehicle fleets easily, accurately, and with more context than previously possible. These capabilities are available both in the cloud and on Azure IoT Edge.

Here is a quick run-down of the new capabilities:

Geospatial indexing

Previously, tracking ‘n’ assets in streaming data against ‘m’ geofence reference data points required a cross join of every reference data entry with every streaming event, an O(n*m) operation. This presented scale issues in scenarios where customers need to manage thousands of assets across hundreds of sites.

To address this limitation, Stream Analytics now supports indexing geospatial data in relevant queries. Instead of cross-joining every streaming event with the reference data, Stream Analytics builds an index over the geospatial objects in the reference data and optimizes every lookup through it. This reduces reference data lookups to O(n * log m), supporting scale orders of magnitude higher than what was previously possible.
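
To see why indexing matters, here is a minimal Python sketch of the idea. It uses a simple grid index over geofence bounding boxes, which is only one of several possible spatial index structures; the article does not specify which structure Stream Analytics uses internally, so treat this as an illustration of the O(n*m) vs. indexed-lookup difference, not the actual implementation:

```python
from collections import defaultdict

# Toy geofences as axis-aligned bounding boxes: (name, min_x, min_y, max_x, max_y).
FENCES = [
    ("site-a", 0, 0, 10, 10),
    ("site-b", 20, 20, 30, 30),
    ("site-c", 25, 0, 35, 10),
]

CELL = 10  # grid cell size

def build_index(fences):
    """Bucket each fence into every grid cell its bounding box overlaps."""
    index = defaultdict(list)
    for fence in fences:
        name, x1, y1, x2, y2 = fence
        for cx in range(int(x1) // CELL, int(x2) // CELL + 1):
            for cy in range(int(y1) // CELL, int(y2) // CELL + 1):
                index[(cx, cy)].append(fence)
    return index

def lookup(index, x, y):
    """Check only the fences bucketed in the event's cell, not all m fences."""
    hits = []
    for name, x1, y1, x2, y2 in index[(int(x) // CELL, int(y) // CELL)]:
        if x1 <= x <= x2 and y1 <= y <= y2:
            hits.append(name)
    return hits

index = build_index(FENCES)
print(lookup(index, 5, 5))    # inside site-a
print(lookup(index, 27, 25))  # inside site-b
```

Each streaming event now probes one cell's short fence list instead of all m fences, which is what turns the naive cross join into a fast per-event lookup.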

Support for WKT format

GeoJSON is an open standard format, based on JavaScript Object Notation, for representing simple geographical features along with their non-spatial attributes. Previously, Azure Stream Analytics did not support all the types defined in the GeoJSON specification. As a result, users could not successfully export some of their geospatial objects and process them in Stream Analytics.

To remedy this gap, we are adding full support for the Well-Known Text (WKT) geospatial format in Stream Analytics. This format is natively supported by Microsoft SQL Server and hence can be readily used in reference data to represent specific geospatial entities or attributes. This will enable users to easily export their data into WKT and add each entry as nvarchar(max).
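
For illustration, here is a small Python sketch that builds WKT strings of the kind you might store as nvarchar(max) reference data. The helper names are hypothetical; only the WKT syntax itself (from the Simple Features standard) is authoritative:

```python
def point_wkt(x, y):
    """Format a single coordinate pair as a WKT POINT."""
    return f"POINT ({x} {y})"

def polygon_wkt(ring):
    """Format a list of (x, y) vertices as a WKT POLYGON.

    WKT polygons close each ring by repeating the first vertex.
    """
    closed = list(ring) + [ring[0]]
    coords = ", ".join(f"{x} {y}" for x, y in closed)
    return f"POLYGON (({coords}))"

# A rectangular geofence as WKT, ready to be stored as a text column:
fence = polygon_wkt([(0, 0), (10, 0), (10, 10), (0, 10)])
print(fence)  # POLYGON ((0 0, 10 0, 10 10, 0 10, 0 0))
```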

Geometry based calculations

Previously, Stream Analytics implemented geographical calculations without the possibility of geometric projections. Users would ingest projected coordinates and expect the calculations to respect those projections, but in many cases the output did not match their expectations because the calculations were based on geography and ignored projections.

To help users overcome this limitation and to allow full-fidelity projected calculations, we are moving away from geography-based computation and towards geometric calculations. This means that developers can now input their projected geo coordinates using the same functions as before, and the output will preserve their projection properties. That said, the ST_DISTANCE function will continue to be the only function computed over geography.
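
The difference is easiest to see in a toy Python sketch: a geometric calculation applies plain Euclidean math to the projected coordinates as given, whereas a geographic calculation computes great-circle distances on the sphere. The haversine formula below is a standard textbook approximation used here for contrast, not Stream Analytics' exact implementation:

```python
import math

def planar_distance(p, q):
    """Euclidean distance on projected coordinates (geometric calculation)."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def haversine_m(lat1, lon1, lat2, lon2, r=6_371_000):
    """Great-circle distance in meters (geographic calculation, haversine)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Geometric: units in, units out, projection preserved.
print(planar_distance((0, 0), (3, 4)))  # 5.0

# Geographic: one degree of longitude at the equator is roughly 111 km.
print(round(haversine_m(0, 0, 0, 1)))
```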

Get started today

We’re excited for you to try out the new geospatial functions in Azure Stream Analytics. To try them, set your job compatibility level to 1.2 and refer to the documented tutorials.


IoT in Action: Enabling cloud transformation across industries


The intelligent cloud and intelligent edge go hand-in-hand, and together they are sparking massive transformation across industries. As computing gets more deeply embedded in the real world, powerful new opportunities arise to transform revenue, productivity, safety, customer experiences, and more. According to a white paper by Keystone Strategy, digital transformation leaders generate 8 percent more per year in operating income than other enterprises.

But what does cloud transformation look like within the context of the Internet of Things (IoT)?

Below I’ve laid out a typical cloud transformation journey and provided examples of how the cloud is transforming city government, industrial IoT, and oil and gas innovators. For a deep dive on this very topic, I hope you’ll join me and a whole host of cloud and IoT experts, and Microsoft partners and customers at the upcoming IoT in Action event in Houston.


The typical cloud transformation journey

As mentioned, the cloud is a vital piece of IoT. Below I’ve outlined a typical cloud journey.

  1. Embrace an innovation mindset: The first part of the cloud transformation journey—and this applies to digital transformation in general—is building a culture and mindset that is willing to innovate, and welcomes change and the potential it brings. This must start with leadership. If leadership doesn’t set the example of an innovation mindset, it will be difficult to achieve buy-in internally.
  2. Clarify rationale for a cloud move: Typically, these reasons are plentiful such as cost savings, greater availability, and better performance. Understanding rationale from a strategic standpoint and aligning with your overall business goals can help you focus your efforts and find the right cloud fit.
  3. Determine which applications to modernize and migrate: Prioritizing applications and determining which ones need to be migrated is also key. Migration is an opportunity for modernization of the IT ecosystem, which can ultimately save time and money. Making a prioritized plan and budgeting for modernization needs is critical.
  4. Expect cloud usage (and costs) to rise: After the initial migration, cloud consumption typically increases. Due to easy access and relatively low-cost, developers and administrators will consume more resources, developing new applications and solutions.
  5. But then it levels out: As an organization gets a clear understanding around its actual cloud consumption, it will be able to prioritize its workloads, bring some workloads back on premise, and negotiate pricing models. Implementing governance processes will help to control costs and ensure optimal performance.

Below I’ve included a few snapshots that show how the cloud transformation journey is paying off for city government, manufacturers, and the oil and gas industry.

Smart cities and the cloud journey

What do flood detection sensors, firefighting drones, transit wi-fi, and smart water meters have in common? They’re cloud connected.

Houston is on a mission to connect its citizens to the city and the city to its citizens. In the wake of massive Hurricane Harvey destruction, the city is doing more than just rebuilding: it is working to become safer, more resilient, and more connected.

To that end, the City of Houston is working with Microsoft and Microsoft partners to leverage cloud transformation and build repeatable, IoT solutions that span transportation, public safety, disaster recovery and response, connected neighborhoods, smart buildings, and more. A shared vision and strong collaboration from city leaders have been crucial to the success of this massive undertaking.

Learn more about the Microsoft and Houston initiative for details around how Houston is embracing cloud transformation to take care of its citizens.

Industrial IoT and the cloud journey

Industrial organizations are also leveraging digital and cloud transformation. By combining cloud with IoT, manufacturers are able to streamline, increase productivity, and predict issues before they happen. They’re even able to offer new service lines.

Rolls-Royce is a fantastic example of a manufacturer that has embraced cloud transformation to create a valuable service that helps its customers minimize costly delays and maximize fuel efficiency. With more than 13,000 commercial aircraft engines in service worldwide, Rolls-Royce uses data from equipment sensors to help airlines predict and plan for maintenance needs and increase fuel economy.

The solution relies on the Microsoft Azure platform and Azure IoT solution accelerators to help filter, synthesize, and analyze massive volumes of data, delivering actionable insights to the right stakeholders at the right time. According to Michael Chester, Product Manager Data Services, Rolls-Royce, “By looking at wider sets of operating data and using machine learning and analytics to spot subtle correlations, we can optimize our models and provide insight that might improve a flight schedule or a maintenance plan and help reduce disruption for our customers.”

Oil and gas IoT and the cloud journey

A shifting competitive landscape, price volatility, technology, and other factors are reshaping the oil and gas industry. Areas of transformation include field empowerment, operations, and industry innovation. Foundational to success is digital transformation.

XTO Energy, a subsidiary of ExxonMobil, knows firsthand the importance of digital and cloud transformation. One of the challenges they faced was that the existing infrastructure where they have major holdings didn’t lend itself to collecting data.

Recognizing the need to modernize and use data to drive better decisions, they deployed a series of intelligent cloud and intelligent edge solutions that have helped them keep tabs on well heads. Using the Microsoft Azure platform and Azure IoT technologies, they collect, store, and analyze data, giving XTO Energy new insights into well operations and future drilling possibilities.

According to Brian Khoury, IoT and Data Architecture Supervisor at XTO Energy, “We recognize the need to further digitize and to use data as an asset that drives insights and solves problems that we couldn’t solve when information is confined to physical paper or siloed across departments. Oil and gas tends to be behind in the use of digital tools compared to other industries, so we’re working hard to be more digitally enabled and connected. Embracing the cloud is an important part of that effort because it frees us up from having to manage hardware, storage, servers—all things that aren’t our core business—and we can scale and spin up resources as needed.”

IoT in Action comes to Houston April 16, 2019

The intelligent cloud and intelligent edge present powerful opportunities across industries. Please join us for a one-day IoT in Action event in Houston. This event is a unique opportunity to explore innovative, scalable IoT solutions that enable cloud transformation across industries – from city government to industrial IoT solution providers and oil and gas innovators. It’s also a great way to connect with experts and network with other Microsoft partners and customers to explore opportunities around the intelligent edge and intelligent cloud.

Azure Sphere Retail and Retail Evaluation feeds


Azure Sphere developers might have noticed that we now have two Azure Sphere OS feeds where once there was only one. The Azure Sphere Preview feed that delivered over-the-air OS updates has been replaced by feeds named Retail Azure Sphere OS and Retail Evaluation Azure Sphere OS. What’s the difference and what does it mean for you?

The Retail feed provides a production-ready OS and is intended for broad deployment to end-user installations. The Retail Evaluation feed provides each new OS for 14 days before we release it to the Retail feed. It is intended for backwards compatibility testing.

At the 19.02 release, both feeds delivered the same OS. The 19.03 quality update was released to the Retail Evaluation feed on March 14, 2019 and was promoted to the Retail feed on March 28, 2019. Future releases will similarly be made available on the Retail Evaluation feed for 14 days before they are promoted to the Retail feed.
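
The promotion schedule is simple date arithmetic; this small Python sketch reproduces the 19.03 dates above:

```python
from datetime import date, timedelta

EVALUATION_WINDOW = timedelta(days=14)

def retail_release_date(evaluation_release: date) -> date:
    """An OS reaches the Retail feed 14 days after it lands on Retail Evaluation."""
    return evaluation_release + EVALUATION_WINDOW

# The 19.03 quality update: Retail Evaluation on March 14, Retail on March 28.
print(retail_release_date(date(2019, 3, 14)))  # 2019-03-28
```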

What’s the value to you?

We’ve designed Azure Sphere for easy updates so that new versions of the OS can be deployed to customer sites without manual intervention. However, we recognize that you want an opportunity to verify your existing applications before your customers receive the new OS. The 14-day evaluation period lets you check that everything works as you expect.

Application binaries that are built only with production APIs from a given OS release will be compatible with all subsequent OS releases. To evaluate the new OS, we recommend that you assign one or more devices to a separate Retail Evaluation device group that is configured to receive the Retail Evaluation feed. Using the devices in this group as “canaries,” you can run your applications and OTA application deployments against the new OS version.

If you encounter problems, please notify us immediately through your Microsoft technical account manager (TAM) so that we can address any issues.

Get started with Azure Sphere

The best way to learn more about the Azure Sphere Retail and Retail Evaluation feeds is by connecting an Azure Sphere devkit or module to the network. If you haven’t already started building with Azure Sphere, you can get started quickly with modules that meet your needs from our ecosystem of Azure Sphere partners. To learn more, view the on-demand Azure Sphere Ecosystem Expansion webinar.

Microsoft Azure portal April 2019 update


This month’s updates include improvements to IaaS, Azure Data Explorer, Security Center, Recovery Services, Role-Based Access Control, Support, and Intune.

Sign in to the Azure portal now and see for yourself everything that’s new. Download the Azure mobile app to stay connected to your Azure resources anytime, anywhere.

Here’s the list of April updates to the Azure portal:

IaaS

Azure Data Explorer

Security Center

Azure Site Recovery

Role-Based Access Control

Support

Other

IaaS

Improved create experience for Managed Disks

Managed disks now have the latest UI pattern for creating resources in Azure. This updated flow eliminates horizontal scrolling during the creation workflow and follows the same UI patterns that we use in other popular services like VM, Storage, Cosmos DB, and AKS, resulting in an experience that is easier to learn and more consistent.

Disk create screenshot

Use of non-ASCII characters for virtual machine names

We loosened the restrictions on the characters you can use to name a virtual machine in the portal to include non-ASCII characters. Azure virtual machine naming in the portal is constrained by two sets of rules: Azure resource naming rules and guest operating system hostname naming rules, which can be more restrictive. With this release, we allow more Unicode characters in the virtual machine name, which is used as both the Azure resource name and the guest hostname. While the Azure resource name is immutable, you can update the in-guest hostname after the VM is created.

Non ascii screenshot

Azure Data Explorer

New full-screen Create Cluster experience

We've changed the way users create clusters. The new experience uses the "review + create" UX pattern that appears across several Azure products.

Data explorer screenshot


Security Center

Public preview: Adaptive network hardening

Azure Security Center can now learn the network traffic and connectivity patterns of your Azure workload and provide you with network security group (NSG) rule recommendations for your internet-facing virtual machines. This is called adaptive network hardening, and it's now in public preview. It helps you secure connections to and from the public internet (made by workloads running in the public cloud), which are one of the most common attack surfaces.

It can be hard to know which NSG rules should be in place to make sure that Azure workloads are available only to required source ranges. These new recommendations in Security Center help you configure your network access policies and limit your exposure to attacks. Security Center uses machine learning to fully automate this process, including an automated enforcement mechanism. These recommendations also use Microsoft’s extensive threat intelligence reports to make sure that known malicious actors are blocked.

To view these recommendations, in the Security Center portal, select Networking and then Adaptive network hardening.

Adaptive application control updates

In Azure Security Center, adaptive application control in audit mode is now available for Azure Linux VMs. This whitelisting solution is also available for non-Azure Windows and Linux VMs and servers that are connected to Security Center.

In addition, you can now rename groups of virtual machine and server clusters in Security Center. They're still automatically named group1, group2, and so on, but you can edit those names to something more meaningful that better represents your application control policy groups. Learn more about automated end-to-end application control in Security Center by visiting our documentation, “Adaptive application controls in Azure Security Center.”

Support for virtual network peering

The network map in Azure Security Center now supports virtual network peering. Directly from the network map, you can view the allowed traffic flows between peered virtual networks and drill into the connections and entities.

Secure score impact changes

In Azure Security Center, the number for secure score impact represents how much your overall secure score will improve if you follow recommendations.

Security Center fine tunes the score of the recommendations, continuously adjusting them to make sure they reflect the necessary prioritization. As part of this effort, the secure score has changed for several recommendations. The change might affect your overall secure score. You can learn more about secure score by visiting our documentation, “Improve your secure score in Azure Security Center.”

Azure Site Recovery

Replication to managed disks

Azure Site Recovery (ASR) now supports disaster recovery of VMware virtual machines and physical servers by directly replicating to Managed Disks. All new protections now have this capability available on the Azure portal. In order to enable replication for a machine, you no longer need to create storage accounts. For more details, refer to the announcement blog post, “Simplify disaster recovery with Managed Disks for VMware and physical servers.”

Recovery screenshot

Role-based access control

New Classic administrators tab

If you are still using the classic deployment model, we've consolidated the management of Co-administrators on a new tab named Classic administrators. If you need to add or remove Co-administrators, you can use this new tab. To learn more about this tab, see Azure classic subscription administrators.

Classic screenshot

To see the new Classic administrators tab:

  1. In the Azure portal, select All services and then Subscriptions.
  2. Select your subscription.
  3. Select Access control (IAM) and then the Classic administrators tab.

Support

Updated support request experience

We have updated the support request creation experience, improving screen real estate usage and creating better interaction patterns.

Basics screenshot

During support case creation, customers can take advantage of our rich self-help content and diagnostics to troubleshoot issues and get immediate solutions. The self-help and troubleshooting steps are available to all customers, including those who have not purchased a technical support plan from Microsoft.

Solutions screenshot

Other

Updates to Microsoft Intune

The Microsoft Intune team has been hard at work on updates as well. You can find the full list of updates to Intune on the “What's new in Microsoft Intune” page, including changes that affect your experience using Intune.

Azure portal “how to” video series

Have you checked out our Azure portal “how to” video series yet? The videos highlight specific aspects of the portal so you can be more efficient and productive while deploying your cloud workloads from the portal. Recent videos include a demonstration of how to create a storage account and upload a blob and how to create an Azure Kubernetes Service cluster in the portal. Keep checking our playlist on YouTube for a new video each week.

Next steps

The Azure portal’s large team of engineers always wants to hear from you, so please keep providing us with your feedback in the comments section below or on Twitter @AzurePortal.

Don’t forget to sign in to the Azure portal and download the Azure mobile app today to see everything that’s new. See you next month!

Self-service exchange and refund for Azure Reservations


Azure Reservations provide flexibility to help meet your evolving needs. You can exchange a reservation for another reservation of the same type, and you can refund a reservation if you no longer need it.

Exchange an existing reserved instance

You start the exchange in the Azure portal with Azure Reservations.

1. Select the reservations that you want to exchange and choose Exchange.

2. Select the SKU you want to purchase and provide the quantity. Make sure that the new purchase total is equal to or greater than the return total. Determine the right size before you purchase.


3. Review and complete the transaction.

For refunding a reservation, go to reservation details and select Refund.

How the return and exchange transactions are processed

First, Microsoft cancels the existing reservation and refunds the pro-rated amount for that reservation. If there is an exchange, the new purchase is processed. Microsoft processes refunds using one of the following methods, depending on your account type and payment method:

Refund processing for enterprise agreement customers

If the original purchase was made using a monetary commitment, the money is added back to the monetary commitment for both exchanges and refunds. Any overage invoices since the original purchase are re-opened and re-rated to make sure that the monetary commitment is used. If the monetary commitment term under which the reservation was purchased is no longer active, credit is added to your current enterprise agreement monetary commitment term.

If the original purchase was made as overage, we issue a credit memo.

Refund processing for pay-as-you-go customers with invoice payment method and Cloud solution provider program

The original reservation purchase invoice is cancelled, and a new invoice is created for the refund. For an exchange, the new invoice shows both the refund and the new purchase, and the refund amount is adjusted against the purchase. If you only refunded a reservation, the prorated amount stays with Microsoft and is adjusted against a future reservation purchase.

Refund processing for pay-as-you-go customers who use credit card payment method

The original invoice is cancelled and a new invoice is created. The money is refunded to the credit card that was used for the original purchase. If you’ve since changed your card, please contact support.

Exchange policies

  • You can return multiple existing reservations to purchase a new reservation of the same type. You can’t exchange reservations of one type for another. For example, you can’t return a virtual machine (VM) reservation to purchase a SQL reservation.
  • Only reservation order owners can process an exchange. Learn how to add or change users who can manage a reservation.
  • An exchange is processed as a refund and a repurchase; separate transactions are created for the cancellation and the new purchase. The pro-rated reservation amount is refunded for the reservations that you trade in, and you are charged fully for the new purchase. The pro-rated reservation amount is the daily pro-rated residual value of the reservation being returned.
  • Reservations can be exchanged or refunded even if the enterprise agreement under which they were purchased has expired and has since been renewed into a new enterprise agreement.
  • You can change any reservation property such as size, region, quantity, and term with the exchange.
  • The new purchase total should equal or be greater than the returned amount.
  • The new reservation purchased as part of exchange has a new term starting from the time of exchange.
  • There is no penalty or annual limits for exchanges.
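
The exchange arithmetic above can be sketched in Python. This is illustrative only; Microsoft's actual billing rules govern the real calculation, and the rounding and day-count conventions here are assumptions:

```python
from datetime import date

def prorated_refund(price: float, start: date, end: date, today: date) -> float:
    """Daily pro-rated residual value of a reservation being returned."""
    total_days = (end - start).days
    remaining_days = (end - today).days
    return round(price * remaining_days / total_days, 2)

def exchange_ok(refund_total: float, new_purchase_total: float) -> bool:
    """The new purchase total must equal or exceed the returned amount."""
    return new_purchase_total >= refund_total

# Roughly half of a one-year, $1000 term remains, so roughly half is refunded:
refund = prorated_refund(1000.0, date(2019, 1, 1), date(2020, 1, 1), date(2019, 7, 2))
print(refund)                    # 501.37
print(exchange_ok(refund, 1200.0))  # True: the exchange is allowed
```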

Refund policies

  • Your total refund is subject to a maximum amount within a 12-month rolling window. To learn more, refer to our refund policies.
  • Only reservation order owners can process a refund. Learn how to add or change users who can manage a reservation.
  • Microsoft reserves the right to charge a 12 percent penalty for any returns, although the penalty is not currently charged.

Exchanging a reservation purchased for a VM size that doesn’t support premium storage for a VM size that does

To exchange reservations purchased for VM sizes that don’t support premium storage for the corresponding VM sizes that do, go to the reservation details and select Exchange. Such an exchange doesn’t reset the term of the reserved instance or create a new transaction.

Visual Studio 2019 .NET productivity


Your friendly neighborhood .NET productivity team (a.k.a. Roslyn) focuses a lot on improving the .NET coding experience. Sometimes it’s the little refactorings and code fixes that really improve your workflow. You may have seen many improvements in the previews, but for all of you who were eagerly awaiting the GA release, here are a few features you may enjoy!

Tooling improvements

I’m most excited about the new Roslyn classification colors. Visual Studio Code’s colors received high praise, so we incorporated similar color schemes into Visual Studio. Your code editor is now just a little more colorful: keywords, user methods, local variables, parameter names, and overloaded operators all get new colors. You can even customize the color for each syntax classification in Tools > Options > Environment > Fonts and Colors; scroll to ‘User Members’.

New roslyn classification colors

At the bottom of files in your editor are the document health indicators as well as our code cleanup icon. The document health indicators let you know at a glance how many errors and warnings are present in the file you currently have open. You can click the code cleanup icon to apply code style rules specified in Tools > Options or, if you have an .editorconfig file that shares one code style across your team, it will apply the styles specified in that file.
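
If you don’t yet share a code style via EditorConfig, a minimal .editorconfig might look like the following; the specific options shown are just examples of common .NET code style settings, not a recommended house style:

```ini
# Top-most EditorConfig file for the repository.
root = true

[*.cs]
indent_style = space
indent_size = 4
# Prefer 'var' for built-in types; surface violations as suggestions.
csharp_style_var_for_built_in_types = true:suggestion
```

Code cleanup and the dotnet format tool both pick up settings from this file, so the whole team formats code the same way.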

You can edit SDK-style project files with a simple double-click! You can also preview these project files in Go To All (Ctrl+T) navigation and search their contents for file references.

Load a subset of projects in your solution with filtered solutions! You can now unload projects and save a .slnf file that will open only the projects you specified. This helps you get to the code you are interested in quickly without loading the entire solution.
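
A .slnf file is a small JSON wrapper around the solution. A hypothetical example (the solution and project paths here are made up) looks like this:

```json
{
  "solution": {
    "path": "MyApp.sln",
    "projects": [
      "src\\MyApp.Core\\MyApp.Core.csproj",
      "src\\MyApp.Web\\MyApp.Web.csproj"
    ]
  }
}
```

Opening this file in Visual Studio loads MyApp.sln with only the two listed projects; everything else in the solution stays unloaded.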

Open only a subset of projects in a solution with solution filters

Find All References now categorizes results by reference type. You can filter by read/write using the new ‘Kind’ column in the Find All References window.

Filter references by Read/Write with Find All References

Run code style formatting over the entire solution at the command line with the dotnet format global tool.

IntelliCode is an extension that offers smarter IntelliSense completions, using machine learning models trained on over 2,000 open source .NET repositories on GitHub.

Intellicode offers smarter suggestions based on your scenario

Now the omnibus of new code fixes and refactorings!

  • Foreach to LINQ
  • Add missing reference for unimported types
  • Sync namespace and folder name
  • Invert conditional expressions
  • Pull members up dialog for promoting members to an interface
  • Wrap/indent/align parameters/arguments
  • Remove unused expression values and parameters

This is a set of highlights of what’s new in Visual Studio 2019; for a complete list, see the release notes. As always, I would love your feedback via Twitter, on GitHub, or in the comments section below. One important note: to use .NET Core 3.0 Preview, you will need to download and install the SDK, as it is not yet included with the Visual Studio 2019 installer.

The post Visual Studio 2019 .NET productivity appeared first on .NET Blog.

.NET Framework April 2, 2019 Cumulative Update for Windows 10 version 1809 and Windows Server 2019


Today, we released the April 2019 Cumulative Update for Windows 10 version 1809 and Windows Server 2019.

Quality and Reliability

This release contains the following quality and reliability improvements.

CLR

    • Addresses an issue in which the Framework throws an exception if the year in the parsed date is greater than or equal to the start year of the next era; the Framework no longer throws such an exception. [603100]
    • Updates Japanese Era dates that are formatted for the first year in an era and for which the format pattern uses the “y年” characters. The format of the year together with the symbol “元” is supported instead of using year number 1. Also, formatting day numbers that include “元” is supported. [646179]
    • Allows the output of Gannen characters in Japanese Era formatting of first year dates regardless of whether the format pattern includes single quotation marks around the “年” character. [777182]

Getting the Update

The Update is available via Windows Server Update Services and Microsoft Update Catalog.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog. The following table is for Windows 10 version 1809 and Windows Server 2019.

Product Version: Windows 10 1809 (October 2018 Update) and Windows Server 2019
.NET Framework versions: 3.5, 4.7.2
Update KB: 4489192 (available via the Microsoft Update Catalog)

Previous Monthly Rollups

The last few .NET Framework Monthly updates are listed below for your convenience:

The post .NET Framework April 2, 2019 Cumulative Update for Windows 10 version 1809 and Windows Server 2019 appeared first on .NET Blog.

Announcing the Azure Functions Premium plan for enterprise serverless workloads


We are very excited to announce the Azure Functions Premium plan in preview, our newest Functions hosting model! This plan enables a suite of long requested scaling and connectivity options without compromising on event-based scale. With the Premium plan you can use pre-warmed instances to run your app with no delay after being idle, you can run on more powerful instances, and you can connect to VNETs, all while automatically scaling in response to load.

Huge thanks to everyone who participated in our private preview! Symantec Corporation and Volpara Solutions are just a few of the companies that will benefit from the new features of the Premium plan.

See below for a comparison of how the Premium plan improves on our existing dynamically scaling plan, the Consumption plan.

SKU comparison table including the Consumption plan and the Premium plan in preview.

Advanced scale controls enable customized deployments

Instance size can now be specified with the Premium plan. You can select up to four D-series cores and 14 GB of memory. These instances are substantially more powerful than the A-series instances available to functions using the Consumption plan, allowing you to run much more CPU or memory intensive workloads in individual invocations.

Available Instance sizes

Instance size graphic displaying details for EP1, EP2, and EP3.

Maximum Instances can now also be specified with the Premium plan. This is one of the most highly requested features and allows you to limit the maximum scale out of your Premium plan. Restricting max scale out can protect downstream resources from being overwhelmed by your functions and allows you to predict your maximum possible bill each month.

Minimum Instances can be specified in the Premium plan to allow you to pre-scale your application ahead of predicted demand. If you suspect an email campaign, sale, or other time-gated event will cause your app to scale faster than it can replenish pre-warmed instances, you can increase your minimum instances to pre-load capacity.

We’ve built a sample Durable Function that will move any function between the Consumption and Premium plan with pre-warmed instances on a schedule, allowing you to optimize for the best cost.

Connect Functions to VNET

The Premium plan allows dynamic scaling functions to connect to a VNET and securely access resources in a private network. This feature was previously only available by running Functions in an App Service Plan or App Service Environment, and is now available in a dynamically scaling model by using the Premium plan. Read more about VNET integration.

Pre-warmed Instances let you avoid cold start

With the Functions Premium plan we are offering a solution to the delay when calling a serverless application for the first time: pre-warmed instances. This delay is commonly referred to as cold start, and it’s one of the most common problems among serverless developers. For more details on what cold start is and why it happens please refer to the blog post, “Understanding serverless cold start.”

In the Premium plan, we offer you the ability to specify a number of pre-warmed instances that are kept warm with your code ready to execute. When your application needs to scale, it first uses a pre-warmed instance with no cold start. Your app immediately pre-warms another instance in the background to replenish the buffer of pre-warmed instances. This model allows you to avoid any delay on the execution for the first request to an idle app, and also at each scaling point.

Today we only allow one pre-warmed instance per site, but we expect to open that up to higher numbers in the following weeks.

Keeping a pool of pre-warmed instances to scale into is one of the core advantages beyond existing workarounds. Today in the Consumption plan many developers work around cold start by implementing a “pinger” to constantly ping their application to keep it warm. While this does work for the first request, apps with pingers will still experience cold start as they scale out, since the new instances pulled to run the application won’t be ready to execute the code immediately. We always keep the number of pre-warmed instances you’ve requested ready as a buffer, so you’ll never see cold-start delays so long as you’re scaling slower than we can warm up instances.
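The buffer-and-replenish model described above can be sketched as a toy simulation. Everything here (tick-based warm-up time, one batch of scale-out events per tick) is a simplification for illustration, not how the platform actually schedules instances:

```python
# Toy model of the pre-warmed buffer described above: each scale-out event
# consumes a warm instance (no cold start) and immediately begins warming a
# replacement in the background. A cold start only occurs when scale-outs
# arrive faster than replacements finish warming. The tick-based warm-up
# time is a made-up simplification, not real platform behavior.
def simulate(scale_events, prewarmed=1, warmup_ticks=2):
    warm = prewarmed   # instances kept ready to serve immediately
    warming = []       # ticks remaining until each replacement is warm
    cold_starts = 0
    for demand in scale_events:
        # Replacements that finished warming rejoin the warm pool.
        warming = [t - 1 for t in warming]
        warm += sum(1 for t in warming if t <= 0)
        warming = [t for t in warming if t > 0]
        for _ in range(demand):
            if warm > 0:
                warm -= 1
                warming.append(warmup_ticks)  # replenish the buffer
            else:
                cold_starts += 1  # buffer exhausted: this request cold-starts
    return cold_starts

# Scaling slower than instances warm up: no cold starts.
print(simulate([1, 0, 1, 0, 1]))  # 0
# Bursting faster than the buffer replenishes: cold starts occur.
print(simulate([1, 1, 1, 1, 1]))  # 2
```

This is also why a “pinger” workaround falls short: it keeps one instance warm but does nothing to maintain a buffer for scale-out.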

Try it out and learn more!

The Azure Functions Premium plan is available in preview today to try out! Here’s what you can do to learn more about it:


Extending Azure Security Center capabilities

This blog post was co-authored by Ron Matchoro, Principal Program Manager, Ronit Reger, Senior Program Manager, Miri Landau, Senior Program Manager, and Devendra Tiwari, Principal PM Manager, Azure Security Center.

As more organizations are delivering innovation faster by moving their businesses to the cloud, increased security is critically important for every industry. Azure has built-in security controls across data, applications, compute, networking, identity, threat protection, and security management so you can customize protection and integrate partner solutions. Microsoft Azure Security Center is the central hub for monitoring and protecting against related incidents within Azure. 

We love making Azure Security Center richer for our customers, and we have several updates to share this week at Hannover Messe 2019. We are excited to announce that Advanced Threat Protection for Azure Storage, the regulatory compliance dashboard, the Dedicated Hardware Security Module (HSM) service in the UK, Canada, and Australia, Azure Disk Encryption support for Virtual Machine Scale Sets (VMSS), and Security Center support for Virtual Machine Scale Sets are now generally available as part of Azure Security Center.

Advanced Threat Protection for Azure Storage is now generally available

Advanced Threat Protection for Azure Storage helps customers detect and respond to potential threats on their storage account as they occur. This layer of protection allows you to protect and address concerns without needing to be an expert in security. Enabling it is quick and simple. Once enabled, security alerts are triggered when suspicious activity occurs and you can view them listed in Azure Security Center. Security alerts provide details of suspicious activity that was detected and recommended actions to take to investigate and mitigate the potential threat.

The benefits of Advanced Threat Protection for Azure Storage include:

  • Detection of anomalous access and data exfiltration activities.
  • Email alerts with actionable investigation and remediation steps.
  • Centralized views of alerts for the entire Azure tenant using Azure Security Center.
  • Easy enablement for many storage accounts using the Azure portal, Azure Policy, or Standard Azure APIs.

To learn more, refer to the documentation, “Advanced Threat Protection for Azure Storage,” or the Azure Security Center pricing page.

Regulatory compliance dashboard in Azure Security Center is generally available

We are pleased to announce that the regulatory compliance dashboard in Azure Security Center is now generally available! The dashboard helps Security Center customers streamline their compliance process by providing insight into their compliance posture for a set of supported standards and regulations.

The compliance dashboard surfaces security assessments and recommendations as they align to specific compliance requirements based on continuous assessments of your Azure and hybrid environments. The dashboard also provides actionable information for how to act on recommendations and reduce risk factors in your environment, and thus improve your overall compliance posture.

Regulatory Compliance dashboard in Azure Security Center

The information provided by the regulatory compliance dashboard can be very useful for providing evidence to internal and external auditors on your compliance status with the supported standards. To further facilitate this, you can now generate and download a compliance report directly from the compliance dashboard. The report can be generated for a particular supported compliance standard and depicts a high-level summary of your current compliance status with respect to that standard. In addition, you can now automate compliance processes and manage them at scale using programmatic APIs.

To learn more about regulatory compliance in Azure Security Center, visit the documentation, “Tutorial: Improve your regulatory compliance.”

Azure Security Center now supports Virtual Machine Scale Sets

Security Center can now protect your Virtual Machine Scale Sets. You can easily monitor the security posture of your VM Scale Sets with security recommendations to increase overall security, reduce vulnerabilities, and detect threats with Security Center’s advanced threat detection capabilities.

Security Center automatically discovers your VM Scale Sets and recommends that you install the monitoring agent to get better security assessments and enable event-based threat detection.

You can view the security health and recommendations of each VM scale set:

Security health and recommendations of each virtual machine scale set

For every VM scale set instance, you can benefit from a list of recommendations such as:

  • Install the monitoring agent 
  • Remediate vulnerabilities in security configuration 
  • Remediate endpoint protection health failures 
  • Install endpoint protection solution on virtual machine scale sets
  • Install system updates 
  • Enable diagnostics logs in Virtual Machine Scale Sets

Threat detection alerts are also available for VM scale set instances for any VM protected by the Security Center standard tier. To learn more, see the documentation on VM Scale Set support.

Note: Pricing of VM scale set instances is the same as for VMs. For detailed information visit our pricing page.

Announcing Azure Dedicated HSM service availability in UK, Canada, and Australia regions

The Azure Dedicated Hardware Security Module (HSM) service provides cryptographic key storage in Azure and meets the most stringent customer security and compliance requirements. This service is the ideal solution for customers requiring FIPS 140-2 Level 3 validated devices and complete, exclusive control of the HSM appliance. The Dedicated HSM service uses SafeNet Luna Network HSM 7 devices from Gemalto. This device offers the highest levels of performance and cryptographic integration options and makes it simple for you to migrate HSM-protected applications to Azure. The Azure Dedicated HSM is leased on a single-tenant basis.

The Azure Dedicated HSM service was originally announced in 8 Azure public regions on November 28, 2018 and we are now pleased to announce that the service has expanded to the UK, Canada, and Australia. With this new announcement, the Dedicated HSM service is now available in 14 regions: East US, West US, South Central US, East US 2, Southeast Asia, East Asia, West Europe, North Europe, UK South, UK West, Canada Central, Canada East, Australia East, and Australia Southeast. We plan to continue expanding this service to other Azure regions.

Azure Dedicated Hardware Security Module

  • To learn about the Dedicated HSM service availability announcement, please refer to blog post, “Announcing Azure Dedicated HSM availability.”
  • To learn more about the Azure Dedicated HSM service, please refer to the service documentation.
  • To learn about pricing and suitability of this service for your applications, please contact your Microsoft Account representative.

Announcing Azure Disk Encryption general availability for Virtual Machine Scale Sets

Today, we are excited to announce the general availability of Azure Disk Encryption (ADE) for Virtual Machine Scale Sets (VMSS). With this announcement, Azure disk encryption can be enabled for Windows and Linux Virtual Machine Scale Sets in Azure public regions. This enables customers to help protect and safeguard the Virtual Machine Scale Sets data at rest using industry standard encryption technology.

Azure Disk Encryption is a capability that helps you encrypt your Windows and Linux IaaS Virtual Machine Scale Sets disks. Disk Encryption leverages the industry standard BitLocker feature of Windows and the DM-Crypt feature of Linux to provide volume encryption of disks. The solution is integrated with Azure Key Vault to help you control and manage the disk-encryption keys and secrets. The solution also ensures that all data on the VM disks are encrypted at rest in your Azure Storage.

The solution is deployed in all Azure public regions. Additional details on supported and unsupported scenarios, interfaces, and how you can use the disk encryption technology to encrypt your Virtual Machine Scale Sets and validate your scenarios are documented below.

Supported scenarios

  1. Virtual Machine Scale Sets encryption is supported only for scale sets created with managed disks, and not supported for native (or unmanaged) disk scale sets.
  2. Virtual Machine Scale Sets encryption is supported for OS and Data volumes for Windows Virtual Machine Scale Sets.
  3. Disable encryption is supported for OS and data volumes for Windows Virtual Machine Scale Sets.
  4. Virtual Machine Scale Sets encryption is supported for data volume for Linux Virtual Machine Scale Sets. Disable encryption is supported for data volumes for Linux Virtual Machine Scale Sets.
  5. Virtual Machine Scale Sets reimage and upgrade operations are supported.
  6. The key vault used to safeguard the encryption keys must be provisioned with the right access policies in the same subscription and region as the Virtual Machine Scale Sets.

Unsupported scenarios

  1. Virtual Machine Scale Sets encryption is not supported for scale sets created with native (or unmanaged) disk.
  2. Virtual Machine Scale Sets encryption is not supported for the OS volume of Linux Virtual Machine Scale Sets.

For additional details on Azure Disk Encryption support for Virtual Machine Scale Sets, refer to the ADE documentation.

We continue to invest in Azure Security Center where you can easily get a unified view of security across all your on-premises and cloud workloads, continuously monitor the security of your machines, networks, and Azure services, and use advanced analytics and the Microsoft Intelligent Security Graph to get an edge over evolving cyber-attacks. To try Security Center’s new capabilities, please visit the Azure Security Center homepage. As always, for any feedback or additional information contact our team at SecurityCenter@microsoft.com.

Learn how Microsoft partners are building a sustainable future at Hannover Messe 2019.

Windows Server 2019 support now available for Windows Containers on Azure App Service

The Azure App Service engineering team is always striving to improve the efficiency and overall performance of applications on our platform. Today, we are happy to announce Windows Server 2019 Container support in public preview.

To our customers, this expanded support translates into clear efficiencies:

  • Reduced container size enables you to be more cost effective by running more applications/slots within your App Service Plan. For example, the Windows Server Core 2019 LTSC base image is 4.28 GB, compared to 11 GB for the Windows Server Core 2016 LTSC image, a decrease of 61 percent!
  • You will benefit from faster startup time for your application because the container images will be smaller.

The container hosts have been updated to support Windows Server 2019, which means we can now support Windows Containers based on:

  • Windows Server Core 2019 LTSC
  • Windows Server Nano 1809
  • Windows Server Core 2016 1803
  • Windows Server Core 2016 1709
  • Windows Server Core 2016 LTSC

Windows Container support is available in our West US, East US, West Europe, North Europe, East Asia, and Australia East regions. Windows Containers are not supported in App Service Environments at present.

Faster app startup times with new, cache-based images

App Service caches several base images and we advise customers to use those images as the base of their containers to enable faster application startup times. Customers are free to use their own base images, though using non-cached base images will lead to longer application startup times.

Customers deploying .NET Framework Applications must choose a base image based on the Windows Server Core 2019 Long Term Servicing Channel release or older, and customers deploying .NET Core Applications must choose a base image based on Windows Server Nano 1809.

Cached base images:

  • mcr.microsoft.com/dotnet/framework/aspnet:4.7.2-windowsservercore-ltsc2019
  • mcr.microsoft.com/windows/nanoserver:1809
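If you build a custom image for a .NET Framework app, a minimal Dockerfile might start from the cached ASP.NET image listed above. This is a hedged sketch: the publish path and site layout are placeholders, not taken from the post.

```dockerfile
# Minimal sketch: base the container on a cached image so App Service can
# start it quickly. Paths here are placeholders for your own build output.
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.7.2-windowsservercore-ltsc2019

# Copy the published ASP.NET site into the default IIS site directory
COPY ./publish/ /inetpub/wwwroot
```

Using a non-cached base image still works, but the platform has to pull the full image on first start, which lengthens startup.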

Resources

We want to hear from you!

Windows Container support for Azure App Service provides you with even more ways to build, migrate, deploy, and scale enterprise grade web and API applications running on the Windows platform. We are planning to add even more capabilities during the public preview and are very interested in your feedback as we move towards general availability.

Unlock dedicated resources and enterprise features by migrating to Service Bus Premium

Azure Service Bus has been the Messaging as a Service (MaaS) option of choice for our enterprise customers. We’ve seen tremendous growth to our customer base and usage of the existing namespaces, which inspires us to bring more features to the service.

We recently expanded Azure Service Bus to support all Azure regions with Availability Zones to help our customers build more resilient solutions. We also expanded the Azure Service Bus Premium tier to more regions to enable our customers to leverage many enterprise ready features on their Azure Service Bus namespaces while also being closer to their customers.

The Azure Service Bus Premium tier is a relatively new offering, made generally available in September 2015, that allows our customers to provision dedicated resources for their Azure Service Bus namespaces. This in turn provides reliable throughput and predictable latency, along with production and enterprise ready features, at a fixed price per Messaging Unit. This is a major improvement over the Azure Service Bus Standard tier, a multi-tenant system optimized for lower throughput workloads using a pay-as-you-go model.

Our Azure Service Bus Premium tier offering has resonated well with the customers, who have been excited to get onboard to enjoy the value add that it provides. However, until now, we haven’t had a way to upgrade the existing Azure Service Bus Standard namespaces to the Premium tier. That is now about to change.

Today, we’re happy to announce tooling, available both in the Azure portal and via the command line interface/PowerShell, that enables our customers to upgrade their existing Azure Service Bus Standard namespaces to the Premium tier. This tooling ensures that no configuration changes are required on the sender and receiver applications, while enabling our customers to adopt the best offering for their use case with minimal downtime.

Migrate to premium in Azure Service Bus Standard

To learn more about this feature and the finer details of what is happening under the hood, please read the documentation.

You can access the portal tool by clicking on the “Migrate to Premium” menu option on the left navigation pane under the Service Bus Standard namespace that you are looking to migrate.

Event-driven Java with Spring Cloud Stream Binder for Azure Event Hubs

Spring Cloud Stream Binder for Azure Event Hubs is now generally available. It is simple to build highly scalable event-driven Java apps using Spring Cloud Stream with Event Hubs, a fully managed, real-time data ingestion service on Azure that is resilient and reliable in any situation, including emergencies, thanks to its geo-disaster recovery and geo-replication features.

Spring Cloud Stream provides a binder abstraction for popular message broker implementations. It provides a flexible programming model built on already established and familiar Spring idioms and best practices, including support for persistent pub/sub semantics, consumer groups, and stateful partitions. Now, developers can use the same patterns for building Java apps with Event Hubs.

Azure Event Hubs and Spring Apps graphic

Getting started 

Check out the tutorial, “How to create a Spring Cloud Stream Binder application with Azure Event Hubs,” and build a Java-based Spring Cloud Stream Binder application using the Spring Boot Initializer with Azure Event Hubs. Go to the Azure portal and create a new Event Hubs namespace. Add the following Maven dependency into your Java project. 

<dependency>
     <groupId>com.microsoft.azure</groupId>
     <artifactId>spring-cloud-azure-eventhubs-stream-binder</artifactId>
     <version>1.1.0.RC4</version>
</dependency>

Publish messages

Use @EnableBinding(Source.class) to annotate a source class and publish messages to Event Hubs with Spring Cloud Stream patterns. You can customize the output channel for the Source with configurations.

  • Destination: Specify which Event Hub to connect with the output channel.
  • Sync/Async: Specify the mode to produce the messages.

Subscribe to messages 

Use @EnableBinding(Sink.class) to annotate a sink class and consume messages from Event Hubs. You can also customize the input channel with configurations. For the full list, please refer to the documentation, “How to create a Spring Cloud Stream Binder application with Azure Event Hubs.”

  • Destination: Specify an Event Hub to bind with the input channel.
  • Consumer Group: Specify a consumer group to receive messages.
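The destination and consumer group settings above map to standard Spring Cloud Stream binding properties in application.properties. A hedged sketch follows; the exact Azure-specific property names may vary by binder version, and all namespace, resource, and hub names are placeholders:

```properties
# Point the binder at your Event Hubs namespace (placeholder values)
spring.cloud.azure.credential-file-path=my.azureauth
spring.cloud.azure.resource-group=my-resource-group
spring.cloud.azure.region=West US
spring.cloud.azure.eventhub.namespace=my-eventhub-namespace

# Bind the output channel to an Event Hub (publish)
spring.cloud.stream.bindings.output.destination=my-eventhub

# Bind the input channel to the same Event Hub with a consumer group (subscribe)
spring.cloud.stream.bindings.input.destination=my-eventhub
spring.cloud.stream.bindings.input.group=$Default
```

See the tutorial linked above for the authoritative property list for your binder version.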

Try building event-driven Java apps using Spring Cloud Stream Binder for Event Hubs 

Try out a Java app using Spring Cloud Stream Binder on Azure Event Hubs and let us know what you think via email or comments below.

Additional resources

Configure Visual Studio across your organization with .vsconfig

As application requirements grow more complex, so do our solutions. Keeping developers’ environments configured across our organizations grows equally complex. Developers need to install specific workloads and components in order to build a solution. Some organizations add these requirements to their README or CONTRIBUTING documents in their repositories. Some organizations might publish these requirements in documents for new hires or even just forward emails. Configuring your development environment often becomes a day-long chore. What’s really needed is a declarative authoring model that just configures Visual Studio like you need it.

In Visual Studio 2017 Update 15.9 we added the ability to export and import workload and component selection to a Visual Studio installation configuration file. Developers can import these files into new or existing installations. Checking these files into your source repos makes them easy to share. However, developers still need to import these to get the features they need.

Automatically install missing components

New in Visual Studio 2019: you can save these files as .vsconfig files in your solution root directory and when the solution (or solution directory) is opened, Visual Studio will automatically detect which components are missing and prompt you to install them.
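For reference, a .vsconfig file is a small JSON document listing workload and component IDs. A minimal illustrative example (the component IDs shown are common ones, not specific to any particular solution):

```json
{
  "version": "1.0",
  "components": [
    "Microsoft.VisualStudio.Component.CoreEditor",
    "Microsoft.VisualStudio.Workload.CoreEditor",
    "Microsoft.VisualStudio.Workload.ManagedDesktop"
  ]
}
```

Because it is plain JSON, the file diffs cleanly in source control, so reviewers can see exactly which components a change to the solution now requires.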

Installing missing components

You can find an example of this in the vswhere repo on GitHub. When you click the Install link, you’re prompted to install any missing components. You can click the View full installation details link if you’d like to select additional components.

Exporting your configuration

In Visual Studio 2019, you can create a .vsconfig file right from Solution Explorer:

  1. Right-click on your solution.
  2. Click Add > Installation Configuration File.
  3. Confirm the location where you want to save the .vsconfig file (defaults to your solution root directory).
  4. Click Review details.
  5. Select or deselect any changes you want to make and click Export.

To help keep the installation footprint minimal, only export those components you know you need to build, test, and possibly publish your solution. One way you can do this is to install a second instance of Visual Studio or install into a virtual machine, add those workloads and optional components you know are necessary, and build and test your solution. Add components as needed until the solution builds successfully, then export your configuration.

Feedback

We love to hear your feedback! You can report a problem or make suggestions for this or any other feature in Visual Studio on our Developer Community site.

 

The post Configure Visual Studio across your organization with .vsconfig appeared first on Visual Studio Setup.

Introducing Time Travel Debugging for Visual Studio Enterprise 2019

The Time Travel Debugging (TTD) preview in Visual Studio Enterprise 2019 provides the ability to record a web app running on an Azure Virtual Machine (VM) and then accurately reconstruct and replay the execution path. TTD integrates with our Snapshot Debugger offering and allows you to rewind and replay each line of code however many times you want, helping you isolate and identify problems that might only occur in production environments.

The most effective type of debugging often occurs in what we call the “inner loop”: while you’re in the act of reviewing and debugging code locally, before you’ve pushed to version control. The problems we encounter during inner loop development are usually easier to understand and diagnose because they are accessible and repeatable.

Today, we’re excited to announce the release of the Time Travel Debugging (TTD) in Visual Studio Enterprise. With TTD, we are giving you the power to record code executed in production and replay the execution path inside Visual Studio. TTD also gives you the ability to move forward and backwards in time as if you were performing “inner loop” debugging locally. You also get access to important debugging features like locals and the call stack.

Today’s debuggers typically allow you to stop at a specific breakpoint by halting the entire process and then only move forward. Even with a more advanced debugging tool like IntelliTrace, you record events and data at discrete moments in time. TTD has a significant advantage over snapshots, logging or crash dump files, as these methods are generally missing the exact details of the execution path that led up to the final failure or bug.

What is the Time Travel Debugging?

Time Travel Debugging (TTD) is a reverse debugging solution that allows you to record the execution of code in an app or process and replay it both forwards and backwards. TTD improves debugging since you can go back in time to better understand the conditions that led up to a specific bug. Additionally, you can replay it multiple times to understand how best to fix the problem. TTD technology was recently introduced in a preview version of WinDbg for native code scenarios.

We have extended the Snapshot Debugger with TTD to allow you to record your application as it executes. That recording can then be played back in Visual Studio 2019 Enterprise where you can rewind and replay each line of code as many times as you want. TTD records on the thread that matches the snappoint conditions and will generally run until the end of the method. If there is an “await” after the snappoint but before the end of the method, we will stop recording where the await occurs. This feature will be in preview for the release of Visual Studio 2019 with a go live license. We plan to add more TTD scenarios in future updates.

Getting started with TTD

The Time Travel Debugging preview can be enabled in the latest version of Visual Studio Enterprise 2019 for Azure Virtual Machines on the Windows OS running ASP.NET (4.8+).

After installing the latest version of Visual Studio Enterprise, complete the following steps:

1. Open the project you would like to Time Travel Debug – ensure that you have the same version of source code that is published to your Azure Virtual Machine.

2. Choose Debug > Attach Snapshot Debugger and select the Azure Virtual Machine your project is deployed to along with an Azure storage account. You will be required to install the Snapshot Debugger site extension the first time an attach is attempted.

3. Select the Time Travel Debugging option and then click Attach. Once Visual Studio is in Snapshot Debugger mode it will be capable of recording using TTD.

4. Create a snappoint and configure it to enable time travel debugging, then click Start Collection.

5. Once your Snapshot has been collected click on View Snapshot and you can use the command bar to step forwards and backwards within the recorded method.

TTD preview limitations

During the initial preview stage of TTD we will be supporting AMD64 web apps running on an Azure Virtual Machine (VM). We expect that recording will add significant overhead to your running process, slowing it down based on process size and the number of active threads. We also anticipate a degraded debugging experience in some of the following scenarios:

  • During a GC compacting phase.
  • Stepping through an optimized method e.g. when you step into a method that does not contain a snappoint.
  • If your application internally loads or unloads app domains.
  • Recording only occurs on the thread that triggered the snappoint; code that subsequently impacts other threads will also be degraded.

Please Note: we will also not record the async causality chains.

During preview testing we found that the TTD file sizes ranged from several hundred megabytes up to several gigabytes depending on how long your session lasts and how long the web app runs. However, files created by TTD will be cleaned up once the Snapshot Debugger session ends, and an app pool recycle is initiated. For our preview release we also recommend using a VM with a minimum of 8GB RAM.

Try out TTD now!

We are incredibly excited about how this preview feature can help enhance your debugging experiences in Azure, but this is just the beginning. Our team continues to design and build additional TTD capabilities that we plan to add in upcoming Visual Studio releases.

We are counting on your feedback via our Developer Community and the Feedback Hub, you can help us prioritize what improvements to make because we genuinely value all the responses you provide.

The post Introducing Time Travel Debugging for Visual Studio Enterprise 2019 appeared first on The Visual Studio Blog.

Windows Template Studio 3.1 released!

We’re extremely excited to announce the Windows Template Studio 3.1!

As always, we love how the community is helping. If you’re interested, please head over to WinTS’s GitHub.

What’s new:

For the full list of adjustments in the 3.1 release, see the changelog on WinTS’s GitHub.

Screenshot of Windows Template Studio

Included in this version:

  • New MenuBar project type
  • Added UI, Unit tests and WinAppDriver testing features
  • Full VS2019 support
  • Added link to report issue from WinTS Wizard
  • Bug fixes

Dev platform updates:

  • AppCenter.Analytics to 1.14.0
  • AppCenter.Crashes to 1.14.0
  • NETCore.UniversalWindowsPlatform to 6.2.8
  • Toolkit.Uwp to 5.1.1
  • UI.Xaml to 2.0.181018004
  • UI.for.UniversalWindowsPlatform to 1.0.1.4
  • NUnit3TestAdapter to 3.13.0
  • NET.Test.Sdk to 16.1.0
  • Store Engagement SDK to 10.1901.28001

How to get the update:

There are two paths to update to the newest build.

  • Already installed: Visual Studio should auto-update the extension. To force an update, go to Tools > Extensions and Updates, open the Updates expander on the left, find Windows Template Studio, and click “Update.”
  • Not installed: Head to https://aka.ms/wtsinstall, click “download” and double click the VSIX installer.

What else is cooking for next versions?

We love all the community support and participation. In addition, here are just a few of the things we are currently building out that will land in future builds:

  • Identity Login (ETA is v3.1.5 in a few weeks). We wanted to include this in 3.1, but it isn’t quite ready. We want to dot our i’s and cross our t’s.
  • Azure features starting to be added
  • Database support

In partnership with the community, we will continue cranking out and iterating on new features and functionality. We’re always looking for additional people to help out and if you’re interested, please head to our GitHub at https://aka.ms/wts. If you have an idea or feature request, please make the request!

The post Windows Template Studio 3.1 released! appeared first on Windows Developer Blog.


Database administrators, discover gold in the cloud

Data is referred to these days as “the new oil” or “black gold” of industry. If the typical Fortune 100 company gains access to a mere 10 percent more of their data, that can result in increased revenue of millions of dollars.

In charge of all this data is the database administrator (DBA). I’ve spent a majority of my technical career in this role and, as immensely rewarding as it was, I was consistently finding ways to automate what I found tedious, as well as acquiring new skills to provide more value to the business. Although IT organizations traditionally look to DBAs to create databases, grant access, back up the data, and the like, many of these manual tasks are now automated in the cloud—leaving many DBAs asking what the future may hold for them. The great news is, we are on the cusp of a revolution in data, and the role of the database administrator is at the forefront of this movement.

Recently, my team discovered new technology that enables us to do more with less—like agile development helping us deploy new features and software faster to market, and DevOps ensuring it was done with less impact to mission-critical systems. My previous DBA role’s scripting and detail-oriented skills came into play at every corner, empowering me to provide value in ways I’d never imagined. The scale of the cloud also removes an infamous bottleneck—the “project slow down.” Where previously we’d wait for acquisition of hardware and on-premises resources, the deployment of hardware and software is just a click or script away in Microsoft Azure. Last but not least, this new world of cloud and DevOps gives us the space to provide more value to the business in the way of code reviews, ensuring what goes to production is high quality and performs well.

With cloud accounts, I’m able to test out and learn new features that I rarely had access to before, including product capabilities running in the cloud long before they’re made available in on-premises releases. We’re no longer spending time after-hours patching and upgrading databases, as these monotonous tasks are done for us, allowing for rested, fully engaged DBAs, ready to take on this new frontier.

As I’m a DBA satisfied with leaving all the tedious tasks behind me and embrace the new world of cloud—the question is now posed to you, my fellow data guardians. What will the future hold for you?

If you’re not there yet and want to know how to get started, I’m here to help. Attend a free webinar where I’ll be sharing more on the many advantages of managing data in the cloud, and how your company’s “black gold” will make you tomorrow’s data hero.

Sign up today

Azure Media Services: The latest Video Indexer updates from NAB Show 2019


After sweeping up multiple awards with the general availability release of Azure Media Services’ Video Indexer, including the 2018 IABM award for innovation in content management and the prestigious Peter Wayne award, the team has remained focused on building a wealth of new features and models that allow any organization with a large archive of media content to unlock insights from that content, and to use those insights to improve searchability, enable new user scenarios and accessibility, and open new monetization opportunities.

At NAB Show 2019, we are proud to announce a wealth of new enhancements to Video Indexer’s models and experiences, including:

  • A new AI-based editor that allows you to create new content from existing media within minutes
  • Enhancements to our custom people recognition, including central management of models and the ability to train models from images
  • Language model training based on transcript edits, allowing you to effectively improve your language model to include your industry-specific terms
  • New scene segmentation model (preview)
  • New ending rolling credits detection models
  • Availability in 9 different regions worldwide
  • ISO 27001, ISO 27018, SOC 1, 2, 3, HITRUST, FedRAMP, HIPAA, and PCI certifications
  • Ability to take your data and trained models with you when moving from trial to paid Video Indexer account

More about all of those great additions in this blog.

In addition, we have exciting news for customers who are using our live streaming platform for ingesting live feeds, transcoding, and dynamically packaging and encrypting them for delivery via industry-standard protocols like HLS and MPEG-DASH. Live transcription is a new feature in our v3 APIs with which you can enhance the streams delivered to your viewers with machine-generated text transcribed from spoken words in the video stream. This text will initially be delivered only as IMSC1.1-compatible TTML packaged in MPEG-4 Part 30 (ISO/IEC 14496-30) fragments, which can be played back via a new build of Azure Media Player. More information on this feature, and the private preview program, is available in the documentation, “Live transcription with Azure Media Services v3.”
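To make the delivery format more concrete, here is a minimal sketch of pulling caption cues out of an IMSC1.1-style TTML document. The sample payload is invented for illustration; real fragments arrive packaged inside MPEG-4 Part 30 fragments and carry additional styling and region metadata.

```python
import xml.etree.ElementTree as ET

# Simplified IMSC1.1-style TTML sample (illustrative only; real live
# transcription payloads are delivered inside MP4 fragments).
TTML = """<tt xmlns="http://www.w3.org/ns/ttml">
  <body><div>
    <p begin="00:00:01.000" end="00:00:03.500">Welcome to the live stream.</p>
    <p begin="00:00:03.500" end="00:00:06.000">Machine-generated captions follow.</p>
  </div></body>
</tt>"""

def extract_cues(ttml_text):
    """Return (begin, end, text) tuples for each caption paragraph."""
    ns = {"tt": "http://www.w3.org/ns/ttml"}
    root = ET.fromstring(ttml_text)
    return [(p.get("begin"), p.get("end"), "".join(p.itertext()).strip())
            for p in root.iterfind(".//tt:p", ns)]

for begin, end, text in extract_cues(TTML):
    print(f"{begin} -> {end}: {text}")
```

A player would render each cue between its `begin` and `end` timestamps as the stream plays.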

We are also announcing two more private preview programs for multi-language transcription and animation detection, where selected customers will be able to influence the models and experiences around them. Come talk to us at NAB Show or contact your account manager to request to be added to these exciting programs!

Extracting fresh content from your media archive has never been easier

One of the ways to use deep insights from media files is to create new media from existing content. This can be to create movie highlights for trailers, use old clips of videos in news casts, create shorter content for social media, or for any other business need.

In order to facilitate this scenario with just a few clicks, we created an AI-based editor that enables you to find the right media content, locate the parts that you’re interested in, and use those to create an entirely new video, using the metadata generated by Video Indexer. Once you’re happy with the result, it can be rendered and downloaded from Video Indexer and used in your own editing applications or downstream workflows.
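Conceptually, the clip-building step boils down to collecting the time ranges where the insights you care about occur and merging nearby ranges into contiguous segments. The sketch below illustrates only that idea; it is not the Video Indexer editor API, and the appearance data is hypothetical.

```python
def merge_ranges(ranges, gap=2.0):
    """Merge time ranges (start, end in seconds) that overlap or sit
    within `gap` seconds of each other into single clip segments."""
    merged = []
    for start, end in sorted(ranges):
        if merged and start - merged[-1][1] <= gap:
            merged[-1][1] = max(merged[-1][1], end)  # extend previous segment
        else:
            merged.append([start, end])              # start a new segment
    return [tuple(r) for r in merged]

# Hypothetical insight occurrences: moments where a chosen person
# or label was detected in the indexed video.
appearances = [(12.0, 18.0), (17.0, 25.0), (40.0, 44.0), (45.5, 50.0)]
print(merge_ranges(appearances))  # [(12.0, 25.0), (40.0, 50.0)]
```

The resulting segment list is the kind of edit decision list a rendering step could turn into a new video.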

Video indexer with Satya Nadella

All these capabilities are also available through our updated REST API. This means that you can write code that creates clips automatically based on insights. The new editor API calls are currently open to public preview.

Want to give the new AI-based editor a try? Simply go to one of your indexed media files and click the “Open in editor” button to start creating new content.

More intuitive model customization and management

Video Indexer comes with a rich set of out-of-the-box models so you can upload your content and get insights immediately. However, AI technology always gets more accurate when you customize it to the specific content it’s employed for. Therefore, Video Indexer provides simple customization capabilities for selected models. One such customization is the ability to add custom persons models to the over 1 million celebrities that Video Indexer can currently identify out-of-the-box. This customization capability already existed in the form of training “unknown” people in the content of a video, but we received multiple customer requests to enhance it even more - so we did!

To accommodate an easy customization process for persons models, we added a central people recognition management page that allows you to create multiple custom persons models per account, each of which can hold up to 1 million different entries. From this location you can create new models, add new people to existing models, review, rename, and delete them if needed. On top of that, you can now train models based on your static images even before you have uploaded the first video to your account. Organizations that already have an archive of people images can now use those archives to pre-train their models. It’s as simple as dragging and dropping the relevant images to the person name, or submitting them via the Video Indexer REST API (currently in preview).

Person's details

Want to learn more? Read about our advanced custom face recognition options.

Another important customization is the ability to train language models with your organization’s terminology or industry-specific vocabulary. To help you improve transcription for your organization faster, Video Indexer now automatically collects manual transcript edits into a new entry in the specific language model you use. All you need to do then is click the “Train” button to add those edits to your own customized model. The idea is to create a feedback loop in which organizations begin with a base out-of-the-box language model and improve its accuracy through manual edits over time, until it aligns with their industry-specific vocabulary and terms.
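The feedback loop can be pictured as diffing the machine transcript against the human-edited version and harvesting the corrected terms as candidate entries for the custom language model. This is an illustrative sketch of the idea, not the service’s internals; the sample transcripts are invented.

```python
import difflib

def harvest_corrections(machine, edited):
    """Collect (heard, corrected) phrase pairs from a manual transcript edit."""
    m_words, e_words = machine.split(), edited.split()
    sm = difflib.SequenceMatcher(None, m_words, e_words)
    pairs = []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "replace":  # the editor replaced what the model heard
            pairs.append((" ".join(m_words[i1:i2]), " ".join(e_words[j1:j2])))
    return pairs

machine = "the patient was given a seat a scan"
edited = "the patient was given a CT scan"
print(harvest_corrections(machine, edited))
```

Each corrected phrase (here, the domain term “CT”) is the kind of entry that would be appended to the custom language model for the next training pass.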

Timeline in Video Indexer

New additions to the Video Indexer pipeline

One of the primary benefits of Video Indexer is having one pipeline that orchestrates multiple insights from different channels into one timeline. We regularly work to enrich this pipeline with additional insights.

One of the latest additions to Video Indexer’s set of insights is the ability to segment the video by semantic scenes (currently in preview) based on visual cues. Semantic scenes add another level of granularity to the existing shot detection and keyframes extraction models in Video Indexer and aim to depict a single event composed of a series of consecutive shots which are semantically related.

Scenes can be used to group together a set of insights and refer to them as insights of the same context in order to deduct a more complex meaning from them. For example, if a scene includes an airplane, a runway, and luggage, the customer can build logic that deducts that it is happening in an airport. Scenes can also be used as a unit to be extracted as a clip from a full video.
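The airport example above can be sketched as a simple co-occurrence rule over a scene’s labels. The context rules here are hypothetical and purely illustrative of the kind of logic a customer might build on top of scene-level insights.

```python
# Hypothetical context rules: a scene whose labels include all the cue
# words for a context gets tagged with that higher-level meaning.
CONTEXT_RULES = {
    "airport": {"airplane", "runway", "luggage"},
    "newsroom": {"desk", "anchor", "monitor"},
}

def infer_contexts(scene_labels):
    """Return the contexts whose cue sets are fully covered by the scene."""
    labels = set(scene_labels)
    return sorted(ctx for ctx, cues in CONTEXT_RULES.items() if cues <= labels)

scene = ["airplane", "runway", "luggage", "person"]
print(infer_contexts(scene))  # ['airport']
```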

Scenes in Video Indexer

Another cool addition to Video Indexer is the ability to identify the ending rolling credits of a movie or TV show. This can come in handy for broadcasters to identify when viewers have finished watching a video, and to pick the right moment to recommend the next show or movie before losing the audience.

Video Indexer runs on trust (and in more regions)

As Video Indexer is part of the Azure Media Services family and is built to serve organizations of all sizes and industries, it is critical to us to help our customers meet their compliance obligations across regulated industries and markets worldwide. As part of that effort, we are excited to announce that Video Indexer is now ISO 27001, ISO 27018, SOC 1,2,3, HIPAA, FedRAMP, PCI, and HITRUST certified. Learn more about the most current certifications status of Video Indexer and all other Azure services.

Additionally, we increased our service availability around the world and are now deployed to 9 regions for your convenience. Available regions now include East US, East US 2, South Central US, West US 2, North Europe, West Europe, Southeast Asia, East Asia, and Australia East. More regions are coming online soon, so stay tuned. You can always find the latest regional availability of Video Indexer by visiting the products by region page.

Video Indexer continues to be fully available for trial in East US. This allows organizations to evaluate the full Video Indexer functionality on their own data before creating a paid account using their own Azure subscription. Once organizations decide to move to their Azure subscription, they can copy all of the videos and model customizations that they created in their trial account by simply checking the relevant check box in the account creation wizard.

Connect Video Indexer to an Azure subscription

Want to be the first to try out our newest capabilities?

Today, we are excited to announce three private preview programs for features that we have been asked for by many different customers.

Live transcription – the ability to stream a live event, where spoken words in the audio is transcribed to text and delivered along with video and audio.

Mixed languages transcription – The ability to automatically identify multiple spoken languages in one video file and to create a mixed language transcription for that file.

Animation characters detection – The ability to identify characters in animated content as if they were real live people!

We will be selecting a set of customers out of a list of those who would like to be our design partners for these new capabilities. Selected customers will be able to highly influence these new capabilities and get models that are highly tuned to their data and organizational flows. Want to be a part of this? Come visit us at NAB Show or contact your account manager for more details!

Visit us at NAB Show 2019

If you are attending NAB Show 2019, please stop by booth #SL6716 to see the latest Azure Media Services innovations! We’d love to meet you, learn more about what you’re building, and walk you through the different innovations Azure Media Services and our partners are releasing at NAB Show. We will also have product presentations in the booth throughout the show.

Have questions or feedback? We would love to hear from you! Use our UserVoice to help us prioritize features, or email VISupport@Microsoft.com for any questions.

Web application firewall at Azure Front Door service


You have a great web application, and users from all over the world love it. Well, so do malicious attackers. Cyber-attacks grow each year in frequency and sophistication, and being unprotected against them exposes you to the risks of service interruptions, data loss, and tarnished reputation.

We have heard from many of you that security is a top priority when moving web applications onto the cloud. Today, we are very excited to announce the public preview of the Web Application Firewall (WAF) for the Azure Front Door service. By combining the global application and content delivery network with a natively integrated WAF engine, we now offer a highly available platform that helps you deliver your web applications to the world, secure and fast!

WAF with the Front Door service leverages the scale of, and the deep security investments we have made at, the Azure edge, and it is designed to protect you from multiple attack vectors such as injection-type attacks and volumetric DoS attacks. It inspects each incoming request at Azure’s network edge, stops unwanted traffic before it reaches your backend servers, and offers protection at scale without sacrificing performance. With WAF for Front Door, you have the option to fine-tune access to your web application using custom rules that you define, as well as to enable a collection of security rules against common web application vulnerabilities packaged as Managed Rulesets. Furthermore, when you use WAF at Front Door, your security policy management is centralized and any changes you make are instantaneously propagated to all the Front Door edges.

A WAF policy is the building block of WAF and defines the security posture for your web application. It can contain two types of security rules: custom rules and a set of pre-configured rule groups known as a Managed Ruleset. The Azure-managed Default Rule Set is updated by Azure as needed to adapt to new attack signatures. If you have a cloud-native, Internet-facing web application, such as a web app hosted on the Azure PaaS platform, it is very simple to add Front Door with a default WAF policy. In just a few clicks, your web application is protected from common OWASP Top 10 exploits, with the latency optimization offered by the Front Door service.
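Conceptually, policy evaluation can be pictured as running custom rules first, in priority order, followed by the managed rules, with the first matching rule deciding the outcome. The sketch below is a simplified mental model of that flow, not the actual WAF engine; the rules and match conditions are illustrative.

```python
def evaluate(request, custom_rules, managed_rules):
    """Simplified WAF policy evaluation: custom rules run first in
    priority order, then managed rules; the first match decides."""
    for rule in sorted(custom_rules, key=lambda r: r["priority"]) + managed_rules:
        if rule["match"](request):
            return rule["action"]
    return "Allow"  # default: let the request through to the backend

# Illustrative rules: block a suspect IP range, and a stand-in for a
# managed signature catching a trivial injection attempt.
custom_rules = [
    {"priority": 1, "action": "Block",
     "match": lambda r: r["client_ip"].startswith("203.0.113.")},
]
managed_rules = [
    {"action": "Block", "match": lambda r: "<script>" in r.get("body", "")},
]

print(evaluate({"client_ip": "203.0.113.7"}, custom_rules, managed_rules))  # Block
print(evaluate({"client_ip": "198.51.100.9", "body": "hello"},
               custom_rules, managed_rules))                                # Allow
```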

Diagram of Web Application Firewall protecting Contoso's Web App

Figure 1 Protecting your Web App with WAF at Front Door


If you are like many of our customers who have compliance and BCDR requirements for your business-critical applications, you probably have your web applications hosted in multiple regions. WAF with Front Door offers centralized policy management and global load balancing supporting many routing options to your backends.

Diagram of Web Application Firewall (WAF) protecting your Web Apps within your organization with multiple regions

Figure 2 Protecting your multi-region web application with WAF at Front Door

WAF with Front Door can protect backends hosted on Azure as well as those hosted on other clouds or on-premises. You may further lock down your backends to allow only traffic from Front Door and deny direct access from the Internet. WAF at Front Door allows granular access and rate control via custom rules. You may create custom rules along the following dimensions:

  • IP allow list and block list: control access to your web applications based on list of client IP addresses or IP address ranges. Both IPv4 and IPv6 are supported.
  • Geographic based access control: control access to your web applications based on a client’s country code.
  • HTTP parameters-based access control: control access to your web applications based on string matching of HTTP(S) request parameters such as query string, post args, request Uri, request header, and request body.
  • Request method-based access control: control access to your web applications based on HTTP request method such as Get, Put, and Head.
  • Size constraint: control access to your web applications based on the lengths of specific parts of a request such as query string, Uri, or request body.
  • Rate limiting rules: limit abnormally high traffic from any client IP. You may set a threshold on the number of web requests allowed from a client IP during a one-minute window. Rate limiting can be combined with match conditions, for example, to rate limit access to a specific Uri path.
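The rate limiting rule above can be pictured as a per-client-IP counter over a one-minute window. The sketch below is a conceptual model of that behavior, not the Front Door implementation; the threshold value is illustrative.

```python
import time
from collections import defaultdict

class MinuteRateLimiter:
    """Fixed one-minute window per client IP, mirroring the idea of a
    'requests per minute' threshold (threshold value is illustrative)."""

    def __init__(self, threshold=100):
        self.threshold = threshold
        self.windows = defaultdict(lambda: [0.0, 0])  # ip -> [window_start, count]

    def allow(self, ip, now=None):
        now = time.time() if now is None else now
        window = self.windows[ip]
        if now - window[0] >= 60:
            window[0], window[1] = now, 0  # start a fresh one-minute window
        window[1] += 1
        return window[1] <= self.threshold

limiter = MinuteRateLimiter(threshold=3)
results = [limiter.allow("198.51.100.9", now=t) for t in (0, 1, 2, 3, 61)]
print(results)  # [True, True, True, False, True]
```

The fourth request inside the same window trips the threshold; the counter resets once a new window begins.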

WAF charges are based on the number of WAF policies and rules you create, the types of managed rule sets you choose, and the number of web requests that you receive. During public preview, WAF at Front Door is free of charge.

As we continue to enhance the Azure WAF offering, we would love to hear your feedback. You can try the Web Application Firewall with Front Door today using the portal, ARM templates, or PowerShell. For more information, visit the detailed documentation for Web Application Firewall (WAF) for the Azure Front Door service.

Welcome to NAB Show 2019 from Microsoft Azure!


Putting the intelligent cloud to work for content creators, owners and storytellers.

Stories entertain us, make us laugh and cry, and are the lens through which we perceive our world. In that world, increasingly overloaded with information, they catch our attention and, if they catch our hearts, we engage. This makes stories powerful, and it’s why so many large technology companies are investing heavily in content – creating it and selling it.

At Microsoft, we’re not in the business of content creation.

Why? Our mission is to help every person and organization on the planet achieve more. So instead of creating or owning content, we want to provide platforms to help content creators and owners achieve more – from the Intelligent Cloud to the Intelligent Edge, with industry leading artificial intelligence (AI). We’re excited to see that mission come to life through customers such as Endemol Shine, Multichoice, RTL, Ericsson and partners like Avid, Akamai, Haivision, Pipeline FX and Verizon Digital Media Services. And we are excited to announce new Azure rendering, Azure Media Services, Video Indexer and Azure Networking capabilities to help you achieve more at NAB Show 2019. Cue scene.

Fix it in post: higher resolution, less time.

The arrival of HD led to an explosion of digital content. Today, not satisfied with even 4K resolution, the industry is moving inexorably toward 8K and beyond. With burgeoning immersive storytelling driving 360-degree / 3D content, high frame-rate, innumerable episodic and unscripted shows on fast release cycles and ever-more visually stunning cinematic features, data volumes are increasing exponentially.

Microsoft Azure stands ready with the storage acceleration and capacity to accept your most expansive projects. The new Azure FXT Edge Filer caching appliance delivers higher scalability and performance than ever before, with high-speed memory, SSD and support for Azure Blob storage. It’s a great fit for high-throughput, low-latency applications such as rendering where you need ultra-fast connections between on-premises storage and cloud compute capacity. We believe our Edge Filer appliances are a major differentiator for customers, and they agree – Avere vFXT for Azure has enabled visual effects studio Mr. X to recently render a feature-length film in Azure.

Azure FXT Edge Filer

Azure FXT Edge Filer

And speaking of rendering, our new Azure Render Farm Manager Portal preview makes it much faster and easier for customers to set up hybrid or cloud-only rendering environments in Azure, including networking setup and Azure storage options, with support for commonly used render farm managers such as PipelineFX Qube.

Whether it’s rendering, visual effects or editing, we offer the price and performance combination you need. And, watch this space and Avid’s newsroom for exciting announcements from Avid Connect 2019 regarding how we’re partnering to ingest, manage, edit and create content in the cloud.

Got content? Get storage. Add AI.

Your petabytes of content + our Azure Data Box or Data Box Heavy (in preview) = secure, enterprise-grade, cost-effective ingest at scale. Just getting off a shoot and have tens of terabytes? Meet Data Box Disk: the same benefits in a portable form factor for smaller content stores. For those on set there is Data Box Edge, which can pre-process media (e.g., remove blank footage) and efficiently transfer it to the cloud through partners such as Dejero or a private high-bandwidth connection with Azure ExpressRoute Direct 100Gbps. We are also making our global network available to you through Azure ExpressRoute Global Reach, which lets you effectively build your WAN on the Azure backbone. ExpressRoute Direct 100Gbps and Global Reach will be generally available as of NAB.

Once in the cloud, you can use Video Indexer’s award-winning AI capabilities to efficiently extract deep insights. Just in time for NAB, we’ve added an AI-based editor to help you generate fresh content in minutes, improved custom face and language models and additional certifications from ISO 27001 to FedRAMP. These new capabilities, and many more, easily integrate with your existing MAMs and can be used with any application to increase accessibility or create new OTT and monetization experiences.

Video Indexer (VI) is part of Azure Media Services (AMS), our hyper-scale, enterprise grade, productive media workflow solution. From ingest and transcoding to packaging and distribution, AMS – and our partners – have you covered. You can learn more about AMS, VI and our new private previews for animation, multi-language transcription and live transcriptions here.

Video Indexer Scene segmentation model

Video Indexer

Stream more content, more easily, to increasingly global audiences

Increasing audiences, form factors, and globalization mean video workflows are becoming more and more complex. Our partners are hard at work making this easier for you, and here are a few of the key announcements:

  • Akamai will directly connect its edge network with Azure through ExpressRoute to give customers higher performance and more predictable costs. It also plans to enhance the delivery of live and on-demand workflows with Azure Media Services and our mutual partners.
  • Verizon Digital Media Services is delivering an enterprise-grade streaming platform on Microsoft Azure to enhance video workflows.
  • Haivision’s new media routing cloud service, SRTHub, will help broadcasters more securely and reliably transport live video globally. SRTHub, built on Azure, will also streamline workflow orchestration using an open ecosystem of Hublets from industry-leading partners including Avid, Wowza and Epic Labs.
  • Telestream will bring its industry leading transcoding solution to Azure.

Delivering high-quality and highly available content and applications requires globally-scalable network solutions. To enable our customers to accelerate and deliver superior global applications, we’re announcing the GA of the Azure Front Door Service (AFD). AFD provides a global single point-of-entry that delivers optimized user experiences for web applications. AFD also includes an integrated web application firewall (WAF) and DDoS protection for securing those applications at the network edge.  

The next frontier

At NAB we’re showcasing how partners such as Zone TV and Nexx.TV are using Microsoft AI and Azure Cognitive Services to create more personalized content and improve monetization of existing media assets.  Stay tuned for more in this space as we work across Microsoft to put our data – and insights – to work for you.

Visit us at NAB Show 2019 – booth #SL6716 – to learn more, meet with the team and see what our partners have to offer. I hope to see you there – or out there in the real world – and look forward to hearing how we can put Azure to work for you.

Fast and optimized connectivity and delivery solutions on Azure


Azure Front Door, ExpressRoute Direct and Global Reach now generally available

Today I’m excited to announce the availability of innovative and industry leading Azure services that will help the attendees of NAB realize their future vision to deliver for their audiences - Azure Front Door Service (AFD), ExpressRoute Direct and Global Reach, as well as some cool new additions to both AFD and our Content Delivery Network (CDN).

This coming week, Microsoft will be at NAB Show 2019 in Las Vegas, bringing together an industry centered on the ability to deliver richer content experiences for audiences around the world. The media and entertainment industry will gather for an in-depth view of the current and future state of media technology and innovation, showcasing new and innovative cloud services to optimize and scale rich content experiences.

Bringing the media industry to the cloud has a tremendous impact on the entire content workflow; from production and post to delivery and IT operations, cloud services enable companies to scale their ability to innovate, create, and bring more content to market. This transformation, however, starts somewhere else: with the most critical piece, the users and consumers of those services.

Sample architecture of media content ingestion to delivery

Fig. 1 Sample architecture of media content ingestion to delivery

With the ever-increasing granularity of data and the quality, volume, and size of content consumed by an enormous number of users and devices, new customer needs and demands are emerging, and we recognize the massive number of options and choices available to our customers today. Built on top of Microsoft’s global network, Azure seeks to provide the fastest and most optimized connectivity and delivery options to our customers for all parts of the media production and delivery workflow.

Driven by customer demand and a passion for pushing more data to Azure, we launched ExpressRoute Direct into preview in the fall of last year. Now generally available, ExpressRoute Direct provides 100 Gbps connectivity, the first service of its scale in the public cloud, and focuses on core scenarios around large data ingestion, R&D, media services, graphics, and the like.

Similarly, and also generally available today, ExpressRoute Global Reach extends the use of ExpressRoute from connecting your on-premises sites or corporate datacenter to Azure, to now also providing connectivity between on-premises sites over the Microsoft global network. Building new or extending existing ExpressRoute solutions with Direct and Global Reach is a fast and flexible way to support multi-site collaboration centered around services, data, and content stored in Azure. It is also a new option to complement your existing connectivity/WAN/MPLS provider as a backup solution, or to provide the primary path where your service provider may not have the reach to deliver services locally.

At the same time, driven by our customers’ needs to drive rich, online application experiences, we launched the Azure Front Door Service into preview. Now generally available, Azure Front Door extends use of the global service that enables Microsoft’s global brands like Office 365, Bing, Teams, Azure DevOps and Xbox to build high performance, high availability, secure web applications. Now with Web Application Firewall (WAF) capabilities in public preview, Azure Front Door accelerates and secures your applications at the edge of Microsoft’s Global network.

 

“Electrolux is a global conglomerate of brands, selling more than 60 million products across 150 markets. Azure Front Door has enabled us to easily scale our service architecture and APIs to all our global developers and partners in the Wellbeing category.

It took us 10 minutes to set up global routing for our API services, using custom domains and own SSL certs.”

Andreas Larsson, Director of Engineering - Software Products

Electrolux logo

 

These new, innovative services enable you to quickly accelerate and optimize your end-to-end workflow in Azure.  Get started with these new services and the rest of Azure’s networking portfolio today and look for more new services coming soon. I encourage you to watch our newest video on hybrid networking options with Azure, as well as additional details and links to resources.

Watch the video, "Hybrid networking in Microsoft Azure" for an architectural overview and demo of hybrid connectivity options in Azure.

Putting a rich platform of infrastructure and app services on top of a world-class global network (WAN), with the ability to connect, ingest, store, and collaborate across shared data and content assets, makes a premier toolbox for building and delivering modern applications and content.

Azure Front Door Service (AFD), the newest member of our application delivery portfolio, is now generally available. Since we launched this sophisticated tool in preview to customers last year, the interest and feedback have been amazing.

AFD enables customers to build applications that are truly global by ensuring fast, always-on, and secure delivery of your web applications, whether the services sit inside or outside of Azure. It provides a one-stop solution for website acceleration, global HTTP/HTTPS load balancing, API fronting, SSL offload, and now also WAF running at the edge of Microsoft’s global network. Improving customers’ application experiences and quality with Azure Front Door can dramatically influence end-user behavior.

Azure Front Door Service diagram

Today, in preview, we are enabling a new and fully integrated web application firewall (WAF) with Azure Front Door. WAF at the edge gives customers total control over access to media and applications. Customers can protect their web applications from multiple attack vectors, such as volumetric denial of service and targeted application exploits, by inspecting each incoming request at Azure’s network edge before it reaches their service’s region.

WAF with Azure Front Door lets you tune access to your web application using custom rules, in addition to turning on a collection of Microsoft-managed security rules against common web application vulnerabilities. It also offers centralized security policy management that instantaneously propagates any changes you make to all the Front Door edges.

Launching AFD to enable our customers to build world-class web applications is another great example of an enterprise-grade service, battle-tested by years of constant support for Microsoft’s biggest businesses like Bing, Office 365, Xbox Live, MSN, and Azure DevOps, proving its mettle at massive scale and high availability for business-critical applications. Get started with Azure Front Door Service for commerce sites, API routing, global websites, and cloud migration scenarios. Learn more about the AFD announcement.

 

Azure CDN offers a true multi-CDN experience for delivering content to global or regional audiences, featuring three world-class networks from Microsoft, Verizon, and Akamai.

The unified platform, APIs, support and billing experience enables easy, fast setup and management of multiple CDN networks all in one place. Deep integration with Azure enables optimized experiences with Azure services and provides benefits whether your content is hosted in Azure or anywhere else.  

To further meet the increasing complexity of our customers’ CDN needs, we’re excited to announce two new features of Azure CDN: root domain support and CDN managed certificates for Azure CDN from Akamai. Through integration with Azure DNS, root domain support is available across all providers in Azure CDN via DNS alias records. This enables products that use their root domain for their websites, experiences, or content to deliver that content through Azure CDN. In addition, managed custom domain certificates now enable Azure CDN from Akamai customers to turn on SSL on their custom domain with a few clicks. Azure CDN also completely handles certificate management tasks such as procurement and renewal.

With these and more upcoming improvements to Azure CDN we’re enabling our customers to customize how they leverage the combined footprint of Microsoft, Verizon, and Akamai to deliver content from our 1300+ (and growing!) points of presence around the world. Find more information on these new features and Azure CDN.

Find out more about Azure’s networking services through the links below.


